How to install .d files of a D library using automake?

What is the correct way to install (system-wide) a D library (on a GNU system, at least) using Makefile.am?
Here is my code to install the static and shared libraries:
install-data-local:
	install librdf_dlang.a librdf_dlang.so $(libdir)
The remaining question is how to install the .d files so that developers can use my library.
In particular, what should the installation directory for .d files be?

If you are doing a system-wide installation of D libraries and sources (interface files, I presume), then the most common places are /usr/include/<project name> or /usr/local/include/<project name>, as long as they do not clash with some existing C/C++ project that stores header files there. Some D programmers prefer /usr/include/d/ or /usr/local/include/d/ as well...
I, for example, use /usr/di (D imports) for this purpose, and all my library projects have their interface files there. I will explain below why I do not like having separate per-project directories.
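Whichever directory you settle on, automake can install the interface files for you with a plain _DATA primary instead of a hand-written rule. A minimal Makefile.am sketch, assuming a /usr/include/d/<project> layout; the directory variable and the source file name are only illustrative:
dimportdir = $(includedir)/d/$(PACKAGE)
dist_dimport_DATA = source/rdf/dlang.d
make install then copies the listed .d files into $(DESTDIR)$(includedir)/d/$(PACKAGE), and make uninstall removes them again, so no extra install-data-local code is needed for them.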
No matter what directory you pick, you need to update your compiler search paths.
Here is a part of my dmd.conf:
[Environment64]
DFLAGS=-I/usr/include/dmd/phobos -I/usr/include/dmd/druntime/import -I/usr/di -L-L/usr/lib64 -L--export-dynamic -fPIC
and my ldc2.conf looks like this:
// default switches appended after all explicit command-line switches
post-switches = [
"-I/usr/include/d/ldc",
"-I/usr/include/d",
"-I/usr/di",
"-L-L/usr/lib64",
];
If you prefer to have a separate directory for every project, you end up with a -I<path> flag for each of them, which I really do not like. However, this approach is very popular among developers, so it is really up to you how to organise the D import files. I know how much developers dislike the Java approach with domain.product.packages, but it fits nicely into a single place where all the D interface files live, and most importantly there are no clashes thanks to the domain/product part...

According to the Filesystem Hierarchy Standard (and e.g. this SO question),
/usr/local/include
looks like a strong candidate on a "linux/unix-like system". See especially note 9:
Historically and strictly according to the standard, /usr/local is for data that must be stored on the local host (as opposed to /usr, which may be mounted across a network). Most of the time /usr/local is used for installing software/data that are not part of the standard operating system distribution (in such case, /usr would only contain software/data that are part of the standard operating system distribution). It is possible that the FHS standard may in the future be changed to reflect this de facto convention.
I have no idea about Windows.

Related

Make: Prioritize -L (or: Ignore contents delivered by pkg-config)

I want to make a library that depends on other libraries.
I have been able to make the static .a files of the dependencies and have them along with the header files readily available in a directory. Running them through file confirms that I have successfully compiled these for all architectures.
When I try to make the final library, it tells me
ld: warning: ignoring file /usr/local....dylib, building for architecture-A but attempting to link with file built for architecture-B
It is correct that the library under the mentioned path is only compiled for the host architecture A (it was installed via the package manager). However, in LDFLAGS I have -L${libdir}/libs (the folder where my libs are), but make only seems to care about the ones in my /usr/local/... folder.
Are there other ways to specifically point make to check the {libdir}/libs folder or even make make ignore the paths from pkg-config in case it searches there first, finds the unfit files and never gets to try the ones I passed in my LDFLAGS?
You write ...
I have been able to make the static .a files of the dependencies and have them along with the header files readily available in a directory.
... but this is probably irrelevant because you seem to be trying to build a shared (i.e. dynamic) library. Static libraries and shared ones don't mix very well.
Are there other ways to specifically point make to check the {libdir}/libs folder or even make make ignore the paths from pkg-config in case it searches there first, finds the unfit files and never gets to try the ones I passed in my LDFLAGS?
You are focusing on make, but make doesn't have much to do with it. It is the linker, not make, that performs the search and the actual link. make just executes the link command you told it to execute.
But yes, you can control the linker's library search order by controlling the order of its command-line options. Library directories specified via -L options are searched in the order they appear on the command line, and all of them before the linker's default library directories.* If ensuring a proper order of arguments does not get you the link you want then it is very likely because the linker is ignoring your static libraries because it is trying to build a dynamic one.
However, you should be able to bypass the search altogether by specifying the full path and filename of the library you want to link instead of using -L or -l options. For example, instead of -L/path/to -lfoo, you might use /path/to/libfoo.dylib (or /path/to/libfoo.a). You don't normally want to hardcode paths like that, but in this case it might serve a diagnostic purpose to do so.
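As a sketch of the two approaches (the compiler driver invocation, the library name foo and the object files are made up for illustration, not taken from the question):
# library directories are searched in command-line order: your -L comes before whatever pkg-config contributes
cc -dynamiclib -o libfinal.dylib final.o -L${libdir}/libs $(pkg-config --libs foo)
# or skip the -L/-l search entirely and name the archive you actually want
cc -dynamiclib -o libfinal.dylib final.o ${libdir}/libs/libfoo.a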
Note also that it is rarely a good idea to link against dynamic libraries that are not installed in their intended location, especially if the libraries are not part of the same project. It may seem at first to work ok, but it contributes to problems with finding the libraries at runtime (and dynamic libraries do need to be found at runtime, too). The same does not apply to static libraries, but that comes with its own set of advantages and disadvantages.
* There's more to it than that, but this answer is already long. Read the linker docs if you want more detail.

DESTDIR vs prefix options in a build system?

Can someone explain to me the purpose of the $(DESTDIR) variable in a build system?
I know that it points to a temporary directory for the package currently being installed, but I cannot picture what its practical use is.
To clarify: I know what the --prefix option is. For instance, if we configure the build system with ./configure --prefix="/usr", all the package's files will go under /usr, like /usr/lib, /usr/share and so on. But in Makefiles I have also seen constructions like the following:
$(DESTDIR)/$(prefix)
What is the purpose of that? In short, is there a difference between DESTDIR and prefix, and when should both be used?
Yes, there's a very important difference... in some environments.
The prefix is intended to be the location where the package will be installed (or appear to be installed) after everything is finalized. For example, if there are hardcoded paths in the package anywhere they would be based on the prefix path (of course, we all hope packages avoid hardcoded paths for many reasons).
DESTDIR allows people to actually install the content somewhere other than the actual prefix: the DESTDIR is prepended to all prefix values so that the install location has exactly the same directory structure/layout as the final location, but rooted somewhere other than /.
This can be useful for all sorts of reasons. One example is facilities like GNU stow, which allow multiple instances to be installed at the same time and easily controlled. Other examples are creating package files using RPM or DEB: after the package is installed you want it unpacked at the root, but in order to create the package you need it installed at some other location.
And other people use it for their own reasons; basically it all boils down to the fact that DESTDIR is used to create a "staging area" for the installation, without actually installing anything into the final location.
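For a concrete sketch (the staging path and the prefix are only illustrative), a typical staged install for packaging looks like:
./configure --prefix=/usr
make
make DESTDIR=/tmp/stage install
The tree under /tmp/stage/usr then mirrors exactly what would land under /usr, which is what tools like rpmbuild, dpkg-deb or stow want to operate on, while every path recorded inside the package still refers to /usr.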

Does 'make'-ing something from source make it self-contained?

Forgive me before I start, as I'm not a C/C++ programmer, merely a PHP one :) but I've been working on projects that use others sourced from online open-source repos, via svn and git. For some of these projects I need to install libraries and then run "./configure", "make" and then "make all" (as an example), and I do this on a "build" virtual machine to get the binaries that I need to use within my project.
The ultimate goal of some of my projects is to then take these "compiled" (if that's the correct term) binaries and place them onto a virtual machine which I would then re-distribute (according to licenses etc).
My question is this: when I build these binaries on my build machine, with all the prerequisites I need in order to build them in the first place ("build-essential", "cmake", "gcc" etc.), once the binaries are on my build machine (in /usr/lib for example), are they self-contained to the point that I can merely copy those /usr/lib binary files that the build created and place them in the same folder on the virtual machines that I would distribute, without those machines having all the build components installed on them?
With all the dependencies that I would need to build the source in the first place, would the final built binary contain them all in itself, or would I have to include them on the distributed servers as well?
Would that work? Is the question a little too general and perhaps it would all depend on what I'm building?
Update from original posting after a couple of responses
I will be distributing the VMs myself, inasmuch as I will build them and then install my projects upon them. Therefore, I know the OS and environment completely. I just don't want to "bloat" them with unnecessary installed software that I don't actually need, because the compiled executables I will place on the distributed VMs will live in, for example, /usr/local/bin ...
That depends on how you link your program to the libraries it depends on. In most cases the default is to link dynamically, which means that you need to distribute your executable along with its dependencies. You can check which libraries are required to run the file using the ldd command.
Theoretically, you can link everything statically, which means the library code is compiled into the executable. The executable would then really be self-contained, but linking statically is not always possible. This depends on the actual libraries you are using and will probably require playing with the ./configure arguments when building them.
Finally, there are some libraries that are always linked dynamically, such as libc. The good thing is that the machine you are distributing to will surely have this library. The bad thing is that the versions of these libraries may differ, and you might face an ABI mismatch.
In short, if your project is not huge and it is possible to link everything statically, go that way. If not, read about AppImage and Docker.
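As a sketch of the "playing with ./configure arguments" part for a libtool-based dependency (whether a particular library honours these switches varies, so check its ./configure --help first):
./configure --enable-static --disable-shared --prefix=/usr/local
make
sudo make install
That builds and installs only the static .a archive of the dependency, so linking your own program against it copies the needed code into your binary rather than leaving a runtime .so dependency behind.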
Distributing the built libraries and headers (a binary distribution) is a possible way and should work. (I always do it in my projects.)
It is not necessary for all of the libraries you built to be installed into /usr/lib. To keep your target machine clean you can install them in another folder too, e.g.
/usr/local/MYLIB/lib/libmylib.so
/usr/local/MYLIB/include/mylib.h
/usr/local/MYOTHERLIB/lib/libmyotherlib.so
/usr/local/MYOTHERLIB/include/myotherlib.h
Advantages:
Easy installation, easy removal
All files live within one subfolder; nothing goes missing and nothing is mixed with other libs
Disadvantage:
The loader must know the extra search path
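Two common ways to give the loader that extra search path (a sketch following the example layout above):
# per-session, via the environment
export LD_LIBRARY_PATH=/usr/local/MYLIB/lib:/usr/local/MYOTHERLIB/lib:$LD_LIBRARY_PATH
# or permanently on GNU/Linux, by adding the directory to the loader cache
echo /usr/local/MYLIB/lib | sudo tee /etc/ld.so.conf.d/mylib.conf
sudo ldconfig
Linking with -Wl,-rpath,/usr/local/MYLIB/lib is a third option that bakes the path into the executable itself.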

Effective way of distributing go executable

I have a go app which relies heavily on static resources like images and jars. I want to install that go executable in different platforms like linux, mac and windows.
I first thought of bundling the resources using https://github.com/jteeuwen/go-bindata, but since there are ~100 files weighing in at around 20 MB, building the executable takes a really long time. I thought a single executable would be an easy thing for people to download and run, but it seems that is not an effective approach.
I then thought of writing an installation package for each platform, e.g. .rpm or .deb packages. These packages would contain all the resources and put them into platform-specific predefined locations that the Go executable can reference. The only thing is that I have to handle this in the Go code: I have to check whether it is Windows and load the files from, say, c:\go-installs, or whether it is Linux and load them from, say, /usr/local/share/go-installs. I want the Go code to be as platform-agnostic as it can be.
Or is there some other strategy for this?
Thanks
This possibly does not qualify as a real answer, but still…
As to your point №2, one way to handle this is to exploit Go's support for conditional compilation: you might create a set of files like res_linux.go, res_windows.go etc. and put the same set of variables in each, pointing to different locations, like
var InstallsPath = `C:\go-installs`
in res_windows.go and
var InstallsPath = `/usr/share/myapp`
in res_linux.go and so on. Then in the rest of the program just reference the res.InstallsPath variable and use the path/filepath package to construct full pathnames of actual resources.
Of course, another way to go is to do a runtime switch on runtime.GOOS variable—possibly in an init() function in one of the source files.
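For illustration, a minimal sketch of that runtime.GOOS variant (the package name res and both paths simply follow the examples above; nothing here is prescribed):
package res

import "runtime"

// InstallsPath points at the platform-specific directory holding the static resources.
var InstallsPath string

func init() {
	switch runtime.GOOS {
	case "windows":
		InstallsPath = `C:\go-installs`
	default: // linux, darwin, the BSDs, ...
		InstallsPath = "/usr/share/myapp"
	}
}
The rest of the program then calls filepath.Join(res.InstallsPath, "some", "resource") to build concrete file names, exactly as with the per-file variant.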
Pack everything in a zip archive and read your resource files from it using archive/zip. This way you'll have to distribute just two files—almost "xcopy deployment".
Note that while on Windows you could just have your executable extract the directory from the pathname of itself (os.Args[0]) and assume the resource file is located in the same directory, on POSIX platforms (GNU/Linux and *BSD etc) the resource file should still be located under /usr/share/myapp or a similar place dictated by FHS (or particular distro's rules), so some logic to locate that file will still be required.
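As a sketch of that approach (the archive name resources.zip and the helper below are illustrative, not part of the original answer; error handling is kept brief):
package main

import (
	"archive/zip"
	"fmt"
	"io"
	"log"
	"os"
	"path/filepath"
)

// readResource loads a single file out of a resources.zip shipped next to the executable.
func readResource(name string) ([]byte, error) {
	exe, err := os.Executable() // os.Args[0] is the older, less reliable alternative
	if err != nil {
		return nil, err
	}
	r, err := zip.OpenReader(filepath.Join(filepath.Dir(exe), "resources.zip"))
	if err != nil {
		return nil, err
	}
	defer r.Close()
	for _, f := range r.File {
		if f.Name != name {
			continue
		}
		rc, err := f.Open()
		if err != nil {
			return nil, err
		}
		defer rc.Close()
		return io.ReadAll(rc)
	}
	return nil, os.ErrNotExist
}

func main() {
	data, err := readResource("images/logo.png")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("loaded", len(data), "bytes")
}
On POSIX systems the same helper could first try a location like /usr/share/myapp/resources.zip and only fall back to the executable's directory, which covers the FHS point above.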
All in all, if this is supposed to be a piece of FOSS, I'd go with the first variant to let the downstream packagers tweak the pathnames. If this is a proprietary (or just niche) software the second idea appears to be rather OK as you'll play the role of downstream packagers yourself.

question about include and library files

Hi, I noticed that in the Linux file system we have 4 folders:
Libraries
/usr/local/lib
/usr/lib
Include files
/usr/local/include
/usr/include
Now I know that while writing a C program the compiler checks these standard folders for libraries and include files in the order mentioned above.
I wanted to know why we have two folders for each: two for libraries and two for include files. Why not just have one of each? What is the reason for this division?
Thank you.
See this document (search for /usr/local):
http://www.pathname.com/fhs/pub/fhs-2.3.html
The /usr/local hierarchy is for use by the system administrator when installing software locally. It needs to be safe from being overwritten when the system software is updated. It may be used for programs and data that are shareable amongst a group of hosts, but not found in /usr.
For a general overview consult Wikipedia:
http://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
Usually because /usr/lib/ and /usr/include/ are used as the main repository for the system-wide libraries and headers, while the more specific /usr/local/lib and /usr/local/include are filled by users who need to install additional libraries/headers.
This means that the latter start out empty on a fresh OS installation, ready to be filled with custom libraries, while the system directories are already full of standard libraries. This way, when you perform a system update, the local folders are left untouched while the system-wide ones are updated.
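As a small sketch of how the split is used in practice (the library name is made up): a library you build yourself is typically installed under /usr/local and is then found ahead of the distribution's copies:
./configure --prefix=/usr/local
make
sudo make install
cc -o demo demo.c -lmylib   # on most setups /usr/local/include and /usr/local/lib are searched before /usr/include and /usr/lib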
