A Chicken Scheme equivalent to Python's virtualenv? - chicken-scheme

Is there a way to create an equivalent of Python's virtual environments (virtualenv)? With virtualenv, one can install Python packages inside the virtual environment (a separate directory) without messing up the global Python environment, and one can remove packages one no longer needs without worrying about removing a package that another Python project depends on. I'm sure there are other benefits that I'm not thinking of at the moment. I notice that when I use chicken-install, it installs all of the eggs in my /usr/local/Cellar/chicken/4.12.0/lib/chicken/8/ dir. Is there a way to have it install an egg into a project-specific directory, similar to how Python's virtualenv works?

There isn't really such a thing in CHICKEN 4. The problem is that installing eggs to a different location is only one part; the other is running programs so that they look up eggs in that location. You can emulate it with something along these lines:
export LOCAL_EGGS=/path/to/project/local   # project-local egg repository
chicken-install -init $LOCAL_EGGS          # initialise that repository directory
export CHICKEN_REPOSITORY=$LOCAL_EGGS      # make the tools and runtime look up eggs there
chicken-install r7rs ...                   # eggs now install into $LOCAL_EGGS
csc ...                                    # build against the eggs in $LOCAL_EGGS

The easiest way to do this is to simply install CHICKEN to a different location using the PREFIX option to make when building it (see the README for instructions). This allows you to have a CHICKEN specifically built for each of your projects. I vastly prefer this option over the others because it is very easy to understand, and CHICKEN itself is very fast to build and not very big, so I find the overhead of doing this quite acceptable.
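For example, a per-project build might look something like this (the version, platform name and paths are illustrative; check the README for the PLATFORM value matching your system):

tar xzf chicken-4.12.0.tar.gz
cd chicken-4.12.0
make PLATFORM=macosx PREFIX=/path/to/project/chicken install
export PATH=/path/to/project/chicken/bin:$PATH   # use this CHICKEN's csc and chicken-install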
Alternatively, use what wasamasa proposed, or use the -deploy option to install eggs alongside the program. See the deployment chapter in the manual for more info.

Actually, you don't need any "virtual environment"; everything is already in place.
There is a straightforward way to change the repository location:
CHICKEN_INSTALL_REPOSITORY is the place where eggs will be installed and which the egg-related tools like chicken-install, chicken-uninstall and chicken-status consult and update. Make sure the paths given in these environment variables are absolute and not relative.
and
CHICKEN_REPOSITORY_PATH is a directory (or a list of directories separated by : or ;) where eggs are to be loaded from for all chicken-based programs.
Point CHICKEN_INSTALL_REPOSITORY to the location where you want your eggs installed. Note that you need to point CHICKEN_REPOSITORY_PATH to your local repository as well as the system one, in order to be able to import the extensions distributed with the CHICKEN system.
Also, you will most likely need to set up the installation prefix:
An alternative installation prefix that will be prepended to extension installation paths if specified. It is set by the -prefix option or environment variable CHICKEN_INSTALL_PREFIX.
and update your PATH:
PATH="$CHICKEN_INSTALL_PREFIX/bin:$PATH"
This allows you to install extensions which provide console programs.
The only thing left to do is export all these variables.
That's it!
This is essentially what Python's virtualenv activate script does. As you can see, these are very simple steps; you don't need a dedicated tool to manage them. A very simple shell script can serve very well.
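For illustration, a minimal sketch of such an "activate" script, using the same chicken-install -repository trick as the direnv function shown further below (the prefix path is illustrative):

# activate.sh -- source this from your shell: . ./activate.sh
PROJECT_PREFIX=/path/to/project/.local
system_repository=$(chicken-install -repository)   # repository of the system CHICKEN
binary_version=${system_repository##*/}            # last path component of that repository
export CHICKEN_INSTALL_PREFIX=$PROJECT_PREFIX
export CHICKEN_INSTALL_REPOSITORY=$PROJECT_PREFIX/lib/chicken/$binary_version
export CHICKEN_REPOSITORY_PATH=$CHICKEN_INSTALL_REPOSITORY:$system_repository
export PATH="$CHICKEN_INSTALL_PREFIX/bin:$PATH"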
How does it work?
This works by introducing one more level of local hierarchy, just as /usr/local does for /usr (see the FHS). If you wonder what a local hierarchy is, take a look at your $HOME/.local; you probably have something interesting inside.
Bonus
Since setting up a per-project extension repository involves only environment modification, this can definitely be automated. There is a very handy tool for solving this kind of problem in general: direnv. Using this simple function in your $HOME/.envrc:
use_chicken() {
  LOCAL=$(expand_path .local)                        # per-project prefix, i.e. ./.local as an absolute path
  system_repository=$(chicken-install -repository)   # repository of the system CHICKEN
  binary_version=${system_repository##*/}            # last path component of that repository
  local_repository=${LOCAL}/lib/chicken/${binary_version}
  path_add CHICKEN_REPOSITORY_PATH ${system_repository}
  path_add CHICKEN_REPOSITORY_PATH ${local_repository}
  export CHICKEN_REPOSITORY_PATH
  export CHICKEN_INSTALL_REPOSITORY=${local_repository}
  export CHICKEN_INSTALL_PREFIX=${LOCAL}
  PATH_add ${LOCAL}/bin                              # expose console programs installed by eggs
}
you can set up your CHICKEN project with just these two lines in the .envrc inside the project directory:
source_up
use chicken
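After adding these lines, run direnv allow in the project directory so that direnv will trust and load the .envrc (this assumes direnv is hooked into your shell as described in its documentation).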

I know https://github.com/ursetto/cenv exists (I have never used it myself), but it is for CHICKEN 5 only (it won't work with CHICKEN 4). I thought I'd mention it in case you plan to migrate to CHICKEN 5.

Related

DESTDIR vs prefix options in a build system?

Can someone explain to me the purpose of the $(DESTDIR) variable in a build system?
I mean, I know that it points to a temporary directory for the package currently being installed, but I cannot imagine what its practical use is.
To clarify: I know what the --prefix option is. For instance, if we configure the build system with ./configure --prefix="/usr", all the package's files will go under /usr, like /usr/lib, /usr/share and so on. But in Makefiles I've also seen the following construction:
$(DESTDIR)/$(prefix)
What is the purpose of that? In short, is there a difference between DESTDIR and prefix, and when should each be used?
Yes, there's a very important difference... in some environments.
The prefix is intended to be the location where the package will be installed (or appear to be installed) after everything is finalized. For example, if there are hardcoded paths anywhere in the package, they would be based on the prefix path (of course, we all hope packages avoid hardcoded paths, for many reasons).
DESTDIR allows people to actually install the content somewhere other than the actual prefix: the DESTDIR is prepended to all prefix values so that the install location has exactly the same directory structure/layout as the final location, but rooted somewhere other than /.
This can be useful for all sorts of reasons. One example is facilities like GNU stow, which allow multiple instances to be installed at the same time and easily controlled. Other examples are creating package files using RPM or DEB: after the package is installed you want it unpacked at the root, but in order to create the package you need it installed at some other location.
And other people use it for their own reasons: basically it all boils down to this: DESTDIR is used to create a "staging area" for the installation, without actually installing into the final location.
Etc.
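As a concrete sketch (a hypothetical autotools-style package; the staging path is arbitrary):

./configure --prefix=/usr
make
make DESTDIR=/tmp/stage install
# Files land under /tmp/stage/usr/bin, /tmp/stage/usr/share, ...
# yet every path recorded in the package still refers to /usr,
# which is where the files will finally live.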

gentoo: how delete all config files on unmerging package (from its ebuild)

I am making my own personal package to have a collection of useful programs and configs. The main idea is to emerge this package and have the system prepared to my preferences. Mostly it works (it simply depends on all my favourite programs), but I have two problems here:
1. How do I install USE flags, unmasks and such before the affected programs are installed?
2. How do I uninstall it? emerge --unmerge does NOT delete files in /etc, so even after uninstalling the package the USE flags (and others) are still kept. My intent is to REMOVE them, so that the next rebuild of world would NOT use them anymore. (Yes, that means a lot of programs would lose some functionality, like support for some languages or for some other programs; that is the desired result.)
My solutions so far are:
1. The package has some files in /etc/portage/package.*:
1.1. I emerge that package with --nodeps (so the config files are installed).
1.2. I emerge it again without that flag (so the dependencies are installed with the right configuration).
2. I create (and install) a script that parses /var/db/packages for my package's CONTENTS and deletes all /etc/portage/* files "manually"; I have to run this script before unmerging the package.
Is there a better way to do it?
You're just doing/understanding it wrong! (sorry :)
First of all, instead of a metapackage (an empty ebuild that has only runtime dependencies), there are other ways:
use sets to describe your preferred packages and manage your USE flags in the usual way (including per-package USE if needed); a minimal sketch of a set file follows after this list;
a medium-complexity solution is to write a metapackage ebuild (your current case), but you can't mask/unmask USE flags that way anyway…
since you obviously already have your own overlay, defining your own profile would solve everything! There you can manage everything just the way you want: mask/unmask any USE flags, define what the predefined set of system packages means for you, etc…
Unfortunately, I don't use Gentoo's Portage (and emerge) and have no idea whether it's possible to have multiple additive profiles. I have my own profiles here and it works perfectly with Paludis.
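For the first option, a minimal sketch of a custom set (the set name mystuff and the atoms are just examples):

# /etc/portage/sets/mystuff -- one package atom per line
app-editors/vim
dev-vcs/git
www-client/firefox

# install or update everything in the set at once:
emerge --ask @mystuff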
Second, never remove any (config-protected) configuration files on uninstall! No packages do that, and there are a bunch of reasons for it… The main one is that the user may have modified them and doesn't want to lose his changes. Moreover, I personally prefer to have every config I've ever touched in a dedicated VCS repo; it wouldn't be nice if someone other than me removed something…
Imagine a real-life example: a user wants to reinstall some package, and he has a bunch of configuration files that he spent some time carefully editing. The trivial way is to uninstall and then install again. Oops! He lost his configs!
Moreover, from an ebuild's point of view, you have the pkg_prerm and pkg_postrm functions, but both of them are called even at upgrade time (i.e. when an unmerge is followed by an immediate merge phase). You have to be really careful to distinguish those use cases… And what is scarier, once any "hardcoded" (and unique) rules are baked into a package, you don't have any influence over them…
So, please, never remove any config-protected files; let the user take care of them (he is the boss, not the package manager)…
Update: If you really want to be able to remove some config-protected files, setting up your own profile looks like an even better idea. You can set CONFIG_PROTECT_MASK to force files and/or directories to be unprotected. That way you don't need to modify any ebuilds or write ugly cleanup code.
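For illustration, such a profile entry might look like this (the overlay and profile paths are hypothetical):

# your-overlay/profiles/your-profile/make.defaults (hypothetical location)
# Paths listed here are excluded from CONFIG_PROTECT, so Portage treats
# files under them like ordinary package files when unmerging.
CONFIG_PROTECT_MASK="/etc/portage/package.use /etc/portage/package.unmask"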

Where should I put a template folder for a bash script?

I'm on OS X (Mavericks, if that matters), and I'm making a bash script that will use resources from a folder called "templates". I'm trying to figure out where I should put it (the templates folder). I'd like the user not to have to modify their PATH when they install it, so I'd rather not do it the way the terminal mysql command does (it lives in a folder in /usr/local/mysql/bin). I really want to be able to put everything into /usr/bin, but I don't know if it's "polite" to put folders in there (I don't see any in there).
Right now I'm leaning towards putting the scripts in /usr/bin and the templates in /usr/lib. Is that how this type of thing is normally done, or is there another way? I'd like to follow a convention, assuming there is one, and I'd like it to apply to as many Unix platforms as possible (I'd like to put the script in a directory where bash scripts live that's consistent across as many Unix platforms as possible). Thanks.
If you follow the Filesystem Hierarchy Standard (FHS), your executable goes in /usr/local/bin, while read-only template files go in /usr/local/share/YOURAPP/. To quote the FHS:
/usr/local/share
The requirements for the contents of this directory are the same as /usr/share. […]
and:
The /usr/share hierarchy is for all read-only architecture independent data files.
(Emphasis added)
If the system admin is meant to customize the template files to take effect system-wide, then they would simply go in /etc/YOURAPP/templates (or something like that).
If the template files are customized on a per-user basis, then the modified copies of the templates (copied from /usr/local/share/YOURAPP/templates) need to be saved in the user's directory, under $HOME/.config/YOURAPP/templates or something like that (thanks to technosaurus for the correction).
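For example, a script could resolve its template directory with a lookup like this (a sketch only; YOURAPP is a placeholder for your program's name):

#!/bin/sh
# Prefer per-user templates, then system-wide overrides in /etc,
# then the read-only defaults shipped under /usr/local/share.
for dir in "$HOME/.config/YOURAPP/templates" \
           /etc/YOURAPP/templates \
           /usr/local/share/YOURAPP/templates; do
    if [ -d "$dir" ]; then
        template_dir=$dir
        break
    fi
done
echo "Using templates from: $template_dir"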
You mentioned that you want to install the templates in a directory alongside your executable. That is not the standard approach on UNIX, at least going by the FHS. If you really want to go this route, there is a sort of convention of installing your app to /opt/YOURAPP/, using whatever organization you want inside that folder.
In all cases, it is not good practice to install executables directly to /usr/bin, as that directory is considered to be under the exclusive control of the OS/distribution. If you want to install there, the accepted way to do that is to create a package for the package manager of every supported OS/distribution.

Clean installation of a MATLAB library on Windows

I would like to create an installer for a library (DLL) that can be used in multiple systems, including MATLAB.
For MATLAB, I have additional *.m and *.mex files to make the DLL functions easily accessible from it.
I also have an installer that modifies the PATH environment variable to make my DLL visible to all potential calling systems.
My problem is that MATLAB does not make use of the system PATH environment variable. Thus, I am looking for a fix that would allow users of my library to run the installer and have my library accessible from MATLAB "out of the box" (possibly after a restart or after reopening the session).
I currently see two ways to do it, neither of which I like:
Write a MATLAB script that uses the addpath()/savepath() functions. I don't like this because:
a. MATLAB may not always be installed.
b. This would mess with the user's MATLAB's own path variable.
c. Upon installing a new version of my library, I would have to mess even more with the MATLAB's path to delete the path to older library before adding the path to the newer library.
Look through the system PATH, search for a ...\MATLAB\RXXXXn\bin path, and use that to install my *.m and *.mex files in the appropriate folder inside MATLAB. I don't like this because:
a. It would mess with MATLAB's own installation.
b. Once again, installing multiple successive versions of the library may cause issues (currently multiple versions may be installed in separate directories, the PATH points to the last one installed, and expert users can modify the PATH according to their needs).
Currently, I am leaning towards option 2, but I am looking for a better, cleaner solution to this installation procedure.
Can anyone give me some MATLAB expert advice?
Thanks in advance for your help!

Easy way to change .deb build prefix?

I am trying to make a live CD to simplify chrooting into unbootable Linux systems for users, as many unbootable-Linux issues can be fixed with chroot, but many users probably don't understand the concept of chroot.
One of the abilities I want to add is the ability to temporarily import some utilities from the live CD into the target system, so that they can be used as if they were installed, to do configuration tasks.
The problem is that I can't seem to work around the apps trying to search for stuff in /usr/share when they are imported (I already have a hacky workaround for /usr/lib using patchelf). I would do a union mount on the /usr/share directories, but that could confuse some package managers when they see files that should not be there, and the user might need to run a package manager to fix the broken system (or at least I think it could confuse package managers).
I'm trying to see if I can create a script that will rebuild all packages to use a different build prefix instead of /usr. The script can rebuild packages with apt-get build-dep / apt-get source / debuild, but it can't change the prefix.
Question: Is there a way to pass an argument to debuild or dpkg-buildpackage to change the build prefix?
Right now it seems I have to look at the contents of the source (from apt-get source) for every package, see which files specify /usr, and figure out a way to change that for each one, but I have a feeling I'm missing something obvious...
Is this possible?
I don't think this is feasible. Why don't you mount in a different location, for example /usr/local? That way, you also eliminate a source of possible conflicts.
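For instance, something along these lines (the mount points are hypothetical and assume the target system is already mounted under /mnt/target):

# Bind-mount the live CD's tool tree over the target's /usr/local before chrooting,
# so the imported utilities and their data files stay out of /usr.
mount --bind /usr/local /mnt/target/usr/local
chroot /mnt/target /bin/bash
# ... do the configuration work, then exit the chroot ...
umount /mnt/target/usr/local   # remove the imported tools again when done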
Still, some packages are full of hardcoded references to the location for their data files, for example.
I'll throw in a pointer to stow as well, although I imagine it's not really helpful for your scenario.
