DESTDIR vs prefix options in a build system?

Can someone explain to me the purpose of the $(DESTDIR) variable in a build system?
I know that it points to a temporary directory for the package currently being installed, but I cannot picture its practical use.
To clarify: I know what the --prefix option is; for instance, if we configure with ./configure --prefix="/usr", all the package's files will go under /usr, like /usr/lib, /usr/share, and so on. But in Makefiles I've also seen the following construction:
$(DESTDIR)/$(prefix)
What's its purpose there? In short, is there a difference between DESTDIR and prefix, and when should both be used?

Yes, there's a very important difference... in some environments.
The prefix is intended to be the location where the package will be installed (or appear to be installed) after everything is finalized. For example, if there are hardcoded paths anywhere in the package, they would be based on the prefix path (of course, we all hope packages avoid hardcoded paths, for many reasons).
DESTDIR allows people to install the content somewhere other than the actual prefix: DESTDIR is prepended to all prefix-derived paths, so that the install location has exactly the same directory structure/layout as the final location, but is rooted somewhere other than /.
This can be useful for all sorts of reasons. One example is a facility like GNU Stow, which allows multiple instances to be installed at the same time and easily managed. Other examples are creating RPM or DEB package files: after the package is installed you want it unpacked at the root, but in order to create the package you need it installed at some other location first.
Other people use it for their own reasons: basically it all boils down to DESTDIR being used to create a "staging area" for the installation, without actually installing anything into its final location.
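To make this concrete, here is a minimal sketch (the program name and paths are illustrative, not from any particular package) of an install target that honours both variables:

# illustrative Makefile fragment
prefix ?= /usr/local
bindir = $(prefix)/bin

install:
	install -d $(DESTDIR)$(bindir)
	install -m 0755 myprog $(DESTDIR)$(bindir)/myprog

A plain make install puts myprog under /usr/local/bin, while make install DESTDIR=/tmp/stage produces /tmp/stage/usr/local/bin/myprog: the same layout, rooted at /tmp/stage, ready to be packed into a .deb or .rpm.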

Related

How to install .d files of a D library using automake?

What is the correct way to install (system-wide) a D library (on a GNU system, at least) using Makefile.am?
Here is my code to install the static and shared libraries:
install-data-local:
	install librdf_dlang.a librdf_dlang.so $(libdir)
The remaining question is how to install .d files for developers to use my library?
Particularly, what should be the installation directory for .d files?
If you are doing a system-wide installation of D libraries and sources (interface files, I presume), then the most common places are /usr/include/<project name> or /usr/local/include/<project name>, as long as it does not clash with some existing C/C++ project that stores header files there. Some D programmers prefer /usr/include/d/ or /usr/local/include/d/ as well...
I, for example, use /usr/di (D imports) for this purpose, and my library projects have all their interface files there. I will explain below why I do not like having separate project directories there.
No matter what directory you pick, you need to update your compiler search paths.
Here is a part of my dmd.conf:
[Environment64]
DFLAGS=-I/usr/include/dmd/phobos -I/usr/include/dmd/druntime/import -I/usr/di -L-L/usr/lib64 -L--export-dynamic -fPIC
and my ldc2.conf looks like:
// default switches appended after all explicit command-line switches
post-switches = [
"-I/usr/include/d/ldc",
"-I/usr/include/d",
"-I/usr/di",
"-L-L/usr/lib64",
];
If you prefer to have a separate directory for every project, you will end up with a -I<path> for each of them, which I really do not like. However, it is a very popular approach among developers, so it is really up to you how to organise the D import files. I know how much developers dislike the Java approach with domain.product.packages, but it fits nicely into a single place where all D interface files live, and most importantly there are no clashes, thanks to the domain/product part...
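For the automake side of the question, a minimal Makefile.am sketch (the directory variable and file names are hypothetical; adjust them to your project) that installs the interface files as plain data:

# hypothetical Makefile.am fragment: install D interface files
dimportdir = $(includedir)/d/rdf_dlang
dimport_DATA = rdf_dlang.d

automake generates the install rules (honouring $(DESTDIR)) for any such <name>dir/<name>_DATA pair, so no hand-written install-data-local rule is needed for the .d files.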
According to the Filesystem Hierarchy Standard (and e.g. this SO question),
/usr/local/include
looks like a strong candidate on a Linux/Unix-like system. See especially note 9:
Historically and strictly according to the standard, /usr/local is for data that must be stored on the local host (as opposed to /usr, which may be mounted across a network). Most of the time /usr/local is used for installing software/data that are not part of the standard operating system distribution (in such case, /usr would only contain software/data that are part of the standard operating system distribution). It is possible that the FHS standard may in the future be changed to reflect this de facto convention.
I have no idea about Windows.

A Chicken Scheme equivalent to Python's virtualenv?

Is there a way to create an equivalent of Python's virtual environments (virtualenv)? With virtualenv, one can install Python packages inside the virtual environment (a separate directory) without messing up the global Python environment, and remove packages one no longer needs without worrying about removing a package that another Python project depends on. I'm sure there are other benefits I'm not thinking of at the moment. I notice that when I use chicken-install, it installs all of the eggs in my /usr/local/Cellar/chicken/4.12.0/lib/chicken/8/ dir. Is there a way to install eggs into a project-specific directory, similar to how Python's virtualenv works?
There isn't really such a thing in CHICKEN 4. The problem is that installing eggs to a different location is only one part; the other is running programs so that they look up eggs in that location. You can emulate it with something along these lines:
export LOCAL_EGGS=/path/to/project/local
# initialise a fresh egg repository there
chicken-install -init $LOCAL_EGGS
# make egg lookups use it
export CHICKEN_REPOSITORY=$LOCAL_EGGS
chicken-install r7rs ...
csc ...
The easiest way to do this is to simply install CHICKEN to a different location using the PREFIX option to make when building it (see the README for instructions). This allows you to have a CHICKEN specifically built for each of your projects. I vastly prefer this option over the others because it is very easy to understand, and CHICKEN itself is very fast to build and not very big, so I find the overhead of doing this quite acceptable.
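For instance (a sketch; the PLATFORM value and the install path are illustrative), building a private copy from the CHICKEN source tree might look like:

# build and install a per-project CHICKEN
make PLATFORM=linux PREFIX=/path/to/project/chicken install
# use that copy's tools; its chicken-install now targets the project prefix
export PATH=/path/to/project/chicken/bin:$PATH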
Alternatively, use what wasamasa proposed, or use the -deploy option to install eggs with the program. See the deployment chapter in the manual for more info.
Actually, you don't need any "virtual environment" - everything is already in place.
There is a straightforward way to change the repository location:
CHICKEN_INSTALL_REPOSITORY is the place where eggs will be installed and which the egg-related tools like chicken-install, chicken-uninstall and chicken-status consult and update. Make sure the paths given in these environment variables are absolute and not relative.
and
CHICKEN_REPOSITORY_PATH is a directory (or a list of directories separated by : or ;) where eggs are to be loaded from for all CHICKEN-based programs.
Point CHICKEN_INSTALL_REPOSITORY to the location where you want it to be. Note that you need to point CHICKEN_REPOSITORY_PATH to your local repository as well as system one in order to be able to import extensions distributed with Chicken system.
Also, you will most likely need to set up the installation prefix:
An alternative installation prefix that will be prepended to extension installation paths if specified. It is set by the -prefix option or environment variable CHICKEN_INSTALL_PREFIX.
and update your PATH:
PATH="$CHICKEN_INSTALL_PREFIX/bin:$PATH"
This allows you to install extensions which provide console programs.
The only thing left to do is export all these variables.
That's it!
This is basically what Python's virtualenv activate script does, in essence. As you can see, these are very simple steps; you don't need a dedicated tool to manage them. A very simple shell script can serve very well.
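A minimal sketch of such a script (the project path is illustrative, and the binary-version directory - 8 here, as in the Cellar path above - should match your installation; chicken-install -repository prints the system value):

# hypothetical activate.sh for a per-project egg repository
export CHICKEN_INSTALL_PREFIX=$HOME/myproject/.local
export CHICKEN_INSTALL_REPOSITORY=$CHICKEN_INSTALL_PREFIX/lib/chicken/8
# project repository first, then the system one
export CHICKEN_REPOSITORY_PATH=$CHICKEN_INSTALL_REPOSITORY:$(chicken-install -repository)
export PATH=$CHICKEN_INSTALL_PREFIX/bin:$PATH

Source it with . ./activate.sh before working on the project.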
How does it work?
This works by introducing one more level of local hierarchy, just as is done for /usr and /usr/local (please see the FHS). If you wonder what this local hierarchy is, take a look at your $HOME/.local - you probably have something interesting inside.
Bonus
Since setting up a per-project extension repository involves only environment modification, it can definitely be automated. There is a very handy tool for solving this kind of problem in general: direnv. Using this simple function in your $HOME/.envrc:
use_chicken() {
    LOCAL=$(expand_path .local)
    # derive the binary version (e.g. 8) from the system repository path
    system_repository=$(chicken-install -repository)
    binary_version=${system_repository##*/}
    local_repository=${LOCAL}/lib/chicken/${binary_version}
    # make both repositories visible to all CHICKEN programs
    path_add CHICKEN_REPOSITORY_PATH ${system_repository}
    path_add CHICKEN_REPOSITORY_PATH ${local_repository}
    export CHICKEN_REPOSITORY_PATH
    # install new eggs into the project-local repository
    export CHICKEN_INSTALL_REPOSITORY=${local_repository}
    export CHICKEN_INSTALL_PREFIX=${LOCAL}
    PATH_add ${LOCAL}/bin
}
you can set up your CHICKEN project with just these two lines in the .envrc inside the project directory:
source_up
use chicken
I know https://github.com/ursetto/cenv exists (I have never used it myself), but it is for CHICKEN 5 only (it won't work with CHICKEN 4). I mention it in case you plan to migrate to CHICKEN 5.

Effective way of distributing go executable

I have a Go app which relies heavily on static resources like images and JARs. I want to install that executable on different platforms, like Linux, Mac, and Windows.
I first thought of bundling the resources using https://github.com/jteeuwen/go-bindata, but since the files (~100 of them) add up to ~20 MB or so, it takes a really long time to build the executable. I thought a single executable would be an easy way for people to download and run it, but it seems that is not an effective approach.
I then thought of writing an installation package for each platform, like .rpm or .deb packages. These packages would contain all the resources and put them into platform-specific predefined locations, and the Go executable could reference them there. The only thing is that I have to handle that in the Go code: if it is Windows, load the files from, say, c:\go-installs, and if it is Linux, load them from, say, /usr/local/share/go-installs. I want the Go code to be as platform-agnostic as it can be.
Or is there some other strategy for this?
Thanks
This possibly does not qualify as a real answer, but still…
As to your second idea, one way to handle this is to exploit Go's support for conditional compilation: you might create a set of files like res_linux.go, res_windows.go, etc., and put the same set of variables in each, pointing to different locations, like
var InstallsPath = `C:\go-installs`
in res_windows.go and
var InstallsPath = `/usr/share/myapp`
in res_linux.go and so on. Then in the rest of the program just reference the res.InstallsPath variable and use the path/filepath package to construct full pathnames of actual resources.
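The _linux/_windows filename suffixes act as implicit build constraints, so the right file is selected by the GOOS of the build; cross-building one binary per platform (output names here are illustrative) might look like:

# res_windows.go is the only res_*.go compiled here
GOOS=windows go build -o dist/myapp.exe
# res_linux.go is the only res_*.go compiled here
GOOS=linux go build -o dist/myapp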
Of course, another way to go is to do a runtime switch on runtime.GOOS variable—possibly in an init() function in one of the source files.
Pack everything in a zip archive and read your resource files from it using archive/zip. This way you'll have to distribute just two files—almost "xcopy deployment".
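As a sketch of what that distribution then looks like (archive and directory names are illustrative):

# pack all static resources into a single archive
zip -r resources.zip images/ jars/
# ship exactly two files per platform:
#   myapp (or myapp.exe) + resources.zip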
Note that while on Windows you could just have your executable extract the directory from the pathname of itself (os.Args[0]) and assume the resource file is located in the same directory, on POSIX platforms (GNU/Linux and *BSD etc) the resource file should still be located under /usr/share/myapp or a similar place dictated by FHS (or particular distro's rules), so some logic to locate that file will still be required.
All in all, if this is supposed to be a piece of FOSS, I'd go with the first variant to let the downstream packagers tweak the pathnames. If this is a proprietary (or just niche) software the second idea appears to be rather OK as you'll play the role of downstream packagers yourself.

go get with multiple projects in workspace

In Go, the workspace contains the src, pkg, and bin directories. How do I create multiple projects in the workspace, each with its own src, pkg, and bin directories, such that I can go get packages into the pkg directory of a particular project?
You probably do not need that. Let's also forget the word "workspace"; it's probably only confusing you.
If you set your GOPATH environment variable, that's all you actually need to have multiple projects sitting independently on your hard disk.
For example, having export GOPATH="$HOME" and performing
$ go get github.com/foo/bar
$ go get github.com/baz/qux
Your directory tree will be:

$GOPATH/
    pkg/                        (compiled packages)
    src/
        github.com/foo/bar/
            bar.go
        github.com/baz/qux/
            qux.go
More details here. Note that the document does talk about workspaces, but I still consider that fact very unfortunate: earlier versions of that doc neither used nor defined the concept, and they were useful anyway. That is, in my opinion, proof that the "workspace" concept is redundant.
go get is not intended to be used that way: all go get packages land in $GOPATH/* as described here: http://golang.org/doc/code.html#remote and there is no concept of separate workspaces.
If you really want several "workspaces": Have several entries in GOPATH (separated by ":" on unix).
(But most just keep everything under one GOPATH).
Remember that go get fetches packages only into the first GOPATH entry; the other entries can be used as "separate workspaces".
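A sketch of that setup (paths are illustrative):

# the first entry receives everything go get fetches;
# the second acts as a separate workspace for your own code
export GOPATH="$HOME/third_party:$HOME/projects"
go get github.com/foo/bar   # lands in $HOME/third_party/src/github.com/foo/bar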

Easy way to change .deb build prefix?

I am trying to make a live CD that simplifies chrooting into unbootable Linux systems, as many unbootable-Linux issues can be fixed with chroot, but many users probably don't understand the concept of chroot.
One of the abilities I want to add is the ability to temporarily import some utilities from the live CD into the target system, so that they can be used as if they were installed, to do configuration tasks.
The problem is that I can't seem to work around the apps searching for their files in /usr/share when they are imported. (I already have a hacky workaround for /usr/lib using patchelf...) I would do a union mount on the /usr/share directories, but that could confuse package managers when they see files that should not be there, and the user might need to run a package manager to fix the broken system (or at least I think it could confuse package managers).
I'm trying to see if I can create a script that will rebuild all packages to use a different build prefix instead of /usr. The script can rebuild packages with apt-get build-dep / apt-get source / debuild, but it can't change the prefix.
Question: Is there a way to pass an argument to debuild or dpkg-buildpackage to change the build prefix?
Right now it seems I have to look at the source (from apt-get source) of every package, see which files specify /usr, and figure out a way to change each of them, but I have a feeling I'm missing something obvious...
Is this possible?
I don't think this is feasible. Why don't you mount in a different location, for example /usr/local? That way, you also eliminate a source of possible conflicts.
Still, some packages are full of hardcoded references to the locations of their data files, for example.
I'll throw in a pointer to stow as well, although I imagine it's not really helpful for your scenario.
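A sketch of the /usr/local idea (all paths here are hypothetical): bind-mount the live CD's tool tree over the target's /usr/local while working in the chroot, so the package manager's own /usr is never touched:

# tools on the live CD, built with prefix /usr/local
mount --bind /run/live/tools /mnt/target/usr/local
chroot /mnt/target /bin/bash
# ... repair work inside the chroot, then after exiting:
umount /mnt/target/usr/local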
