Hi, I noticed that in the Linux file system we have four folders:
Libraries
/usr/local/lib
/usr/lib
Include files
/usr/local/include
/usr/include
Now I know that when compiling a C program, the compiler checks these standard folders for libraries and include files in the order mentioned above.
I wanted to know why there are two folders for each: two for libraries and two for include files. Why not just have one of each? What is the reason for this division?
Thank you.
See this document (search for /usr/local):
http://www.pathname.com/fhs/pub/fhs-2.3.html
The /usr/local hierarchy is for use by the system administrator when installing software locally. It needs to be safe from being overwritten when the system software is updated. It may be used for programs and data that are shareable amongst a group of hosts, but not found in /usr.
For a general overview consult Wikipedia:
http://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
Usually because /usr/lib/ and /usr/include/ are used as the main repositories for the system-wide libraries and includes, while the more specific /usr/local/lib and /usr/local/include are filled by users who need to install additional libraries/headers.
This means that the latter ones start empty on a fresh OS installation, ready to be filled with custom libraries, while the system ones are already populated with the standard libraries. That way, when you perform a system update, the local folders are left untouched while the system-wide ones are updated.
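If you want to verify the order on your own machine, here is a quick check with a typical GNU toolchain (the exact output depends on your distribution and compiler version):

# Show the include search order; /usr/local/include is listed before /usr/include
echo | gcc -E -v -x c - 2>&1 | sed -n '/#include <...> search starts here:/,/End of search list./p'

# Show the library directories the compiler driver passes to the linker
gcc -print-search-dirs | grep '^libraries'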
Related
I want to make a library that depends on other libraries.
I have been able to make the static .a files of the dependencies and have them along with the header files readily available in a directory. Running them through file confirms that I have successfully compiled these for all architectures.
When I try to make the final library, it tells me
ld: warning: ignoring file /usr/local....dylib, building for architecture-A but attempting to link with file built for architecture-B
It is correct that the library under the mentioned path is only compiled for the host architecture A (installed via the package manager). However, in the LDFLAGS I have -L${libdir}/libs (the folder where the libs are), but make only seems to care about the ones in my /usr/local/.. folder.
Are there other ways to specifically point make to check the {libdir}/libs folder or even make make ignore the paths from pkg-config in case it searches there first, finds the unfit files and never gets to try the ones I passed in my LDFLAGS?
You write ...
I have been able to make the static .a files of the dependencies and have them along with the header files readily available in a directory.
... but this is probably irrelevant because you seem to be trying to build a shared (i.e. dynamic) library. Static libraries and shared ones don't mix very well.
Are there other ways to specifically point make to check the {libdir}/libs folder or even make make ignore the paths from pkg-config in case it searches there first, finds the unfit files and never gets to try the ones I passed in my LDFLAGS?
You are focusing on make, but make doesn't have much to do with it. It is the linker, not make, that performs the search and the actual link. make just executes the link command you told it to execute.
But yes, you can control the linker's library search order by controlling the order of its command-line options. Library directories specified via -L options are searched in the order they appear on the command line, and all of them before the linker's default library directories.* If ensuring a proper order of arguments does not get you the link you want then it is very likely because the linker is ignoring your static libraries because it is trying to build a dynamic one.
However you should be able to bypass the search altogether by specifying a full path and filename of the library you want to link instead of using -L or -l options. For example, instead of -L/path/to -lfoo, you might use /path/to/libfoo.dylib (or /path/to/libfoo.a). You don't normally want to hardcode paths like that, but in this case it might serve a diagnostic purpose to do so.
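For illustration (mine.o, libfoo and ${libdir} are just placeholders taken from the question's setup, not something you must use):

# -L directories are searched in order, before the default ones such as /usr/local/lib
cc -dynamiclib -o libmine.dylib mine.o -L${libdir}/libs -lfoo

# Bypass the search entirely by naming the archive directly
cc -dynamiclib -o libmine.dylib mine.o ${libdir}/libs/libfoo.a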
Note also that it is rarely a good idea to link against dynamic libraries that are not installed in their intended location, especially if the libraries are not part of the same project. It may seem at first to work ok, but it contributes to problems with finding the libraries at runtime (and dynamic libraries do need to be found at runtime, too). The same does not apply to static libraries, but that comes with its own set of advantages and disadvantages.
* There's more to it than that, but this answer is already long. Read the linker docs if you want more detail.
What is the correct way to install (system-wide) a D library (on a GNU system, at least) using Makefile.am?
Here is my code to install the static and shared libraries:
install-data-local:
	install librdf_dlang.a librdf_dlang.so $(libdir)
The remaining question is how to install the .d files so that developers can use my library.
In particular, what should the installation directory for .d files be?
If you are doing a system-wide installation of D libraries and source (interface files I presume), then the most common places are /usr/include/<project name> or /usr/local/include/<project name> as long as it does not clash with some existing C/C++ project that stores header files there. Some D programmers prefer /usr/include/d/ or /usr/local/include/d/ as well...
I, for example, use /usr/di (D imports) for this purpose, and my library projects have all their interface files there. I will explain why I do not like to have separate per-project directories there.
No matter what directory you pick, you need to update your compiler search paths.
Here is a part of my dmd.conf:
[Environment64]
DFLAGS=-I/usr/include/dmd/phobos -I/usr/include/dmd/druntime/import -I/usr/di -L-L/usr/lib64 -L--export-dynamic -fPIC
and the relevant part of ldc2.conf looks like:
// default switches appended after all explicit command-line switches
post-switches = [
"-I/usr/include/d/ldc",
"-I/usr/include/d",
"-I/usr/di",
"-L-L/usr/lib64",
];
If you prefer to have a separate directory for every project, you would end up with a -I<path> for each of them. I really do not like this approach. However, it is very popular among developers, so it is really up to you how to organise the D import files. I know how much developers dislike the Java approach with domain.product.packages, but it fits nicely into a single place where all D interface files live, and most importantly there are no clashes thanks to the domain/product part...
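As for the Makefile.am side of the question, automake lets you declare your own installation directory and list the interface files as DATA. A minimal sketch, assuming you pick $(includedir)/d/rdf; the directory and file names below are only examples:

# Install D interface files; automake generates the install/uninstall rules
dincludedir = $(includedir)/d/rdf
dinclude_DATA = source/rdf/dlang.d source/rdf/util.d

With such a rule the hand-written install-data-local target is no longer needed for the interface files, and DESTDIR-based staged installs keep working.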
According to the Filesystem Hierarchy Standard (and e.g. this SO question),
/usr/local/include
looks like a strong candidate on a "Linux/Unix-like system". See especially note 9:
Historically and strictly according to the standard, /usr/local is for data that must be stored on the local host (as opposed to /usr, which may be mounted across a network). Most of the time /usr/local is used for installing software/data that are not part of the standard operating system distribution (in such case, /usr would only contain software/data that are part of the standard operating system distribution). It is possible that the FHS standard may in the future be changed to reflect this de facto convention.
I have no idea about Windows.
Can someone explain to me the purpose of the $(DESTDIR) variable in a build system?
I know that it points to a temporary directory for the package currently being installed, but I cannot imagine what its practical use is.
To clarify: I know what the --prefix option is; for instance, if we invoke the build system like ./configure --prefix="/usr", all the package's files will go under /usr, like /usr/lib, /usr/share and so on. But in Makefiles I've also seen the following construction:
$(DESTDIR)/$(prefix)
And what's its purpose? In short, is there a difference between DESTDIR and prefix, and when should both be used?
Yes, there's a very important difference... in some environments.
The prefix is intended to be the location where the package will be installed (or appear to be installed) after everything is finalized. For example, if there are hardcoded paths in the package anywhere they would be based on the prefix path (of course, we all hope packages avoid hardcoded paths for many reasons).
DESTDIR allows people to actually install the content somewhere other than the actual prefix: the DESTDIR is prepended to all prefix values so that the install location has exactly the same directory structure/layout as the final location, but rooted somewhere other than /.
This can be useful for all sorts of reasons. One example is facilities like GNU Stow, which allow multiple instances to be installed at the same time and easily controlled. Other examples are creating package files using RPM or DEB: after the package is installed you want it unpacked at the root, but in order to create the package you need it installed at some other location.
And other people use it for their own reasons: basically it all boils down to this: DESTDIR is used to create a "staging area" for the installation, without actually installing into the final location.
Etc.
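A typical staged installation looks like this (the /tmp/stage path is only an example):

./configure --prefix=/usr
make
make DESTDIR=/tmp/stage install
# Files land under /tmp/stage/usr/bin, /tmp/stage/usr/lib, ...,
# while every path baked into the package still refers to /usr.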
Forgive me before I start, as I'm not a C/C++ programmer, merely a PHP one :) but I've been working on projects that use others sourced from online open-source repos, such as SVN and Git ones. For some of these projects, I need to install libraries and then run "./configure", "make" and then "make all" (as an example), and I do this on a "build" virtual machine to get the binaries that I need to use within my project.
The ultimate goal of some of my projects is to then take these "compiled" (if that's the correct term) binaries and place them onto a virtual machine which I would then re-distribute (according to licenses etc).
My question is this: when I build these binaries on my build machine, with all the prerequisites that I need in order to build them in the first place ("build-essential", "cmake", "gcc", etc.), and the binaries end up on my build machine (in /usr/lib, for example), are they self-contained to the point that I can merely copy those /usr/lib binary files that the build created and place them in the same folder on the virtual machines that I would distribute, without those machines having all the build components installed on them?
With all the dependencies that I would need to build the source in the first place, would the final built binary contain them all in itself, or would I have to install them on the distributed servers as well?
Would that work? Is the question a little too general and perhaps it would all depend on what I'm building?
Update from original posting after a couple of responses
I will be distributing the VMs myself, inasmuch as I will build them and then install my projects upon them. Therefore, I know the OS and environment completely. I just don't want to "bloat" them with installed software that I don't actually need, since the compiled executables will be placed on the distributed VMs in, for example, /usr/local/bin ...
That depends on how you link your program to the libraries it depends on. In most cases, the default is to link dynamically, which means that you need to distribute your executable along with its dependencies. You can check which libraries are required to run the file using the ldd command.
Theoretically, you can link everything statically, which means that the library code is compiled into the executable. The executable would then really be self-contained, but linking statically is not always possible. This depends on the actual libraries you are using and will probably require playing with ./configure arguments when building them.
Finally, there are some libraries that are always linked dynamically, such as libc. The good thing is that the machine you are distributing to will almost surely have this library. The bad thing is that the versions of these libraries may differ, and you might face an ABI mismatch.
In short, if your project is not huge and it is possible to link everything statically, go this way. If not, read about AppImage and Docker.
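For example, to check the dynamic dependencies mentioned above, or to attempt a fully static link (myprogram and libfoo are placeholders):

# See which shared libraries the binary needs at runtime
ldd /usr/local/bin/myprogram

# Link statically where the libraries allow it
gcc -o myprogram main.o -static -lfoo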
Distributing built libraries and headers (a binary distribution) is a possible way and should work. (I always do this in my projects.)
It is not necessary for all of the libraries you built to be installed into /usr/lib. To keep your target machine clean, you can install them in another folder instead, e.g.
/usr/local/MYLIB/lib/libmylib.so
/usr/local/MYLIB/include/mylib.h
/usr/local/MYOTHERLIB/lib/libmyotherlib.so
/usr/local/MYOTHERLIB/include/myotherlib.h
Advantages:
Easy installation, easy remove
All files within one subfolder, no files are missing, no mix with other libs
Disadvantage:
The loader must know the extra search path
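A few common ways to address that on a glibc-based system, reusing the example paths above:

# 1. Register the directory with the dynamic loader, system-wide
echo /usr/local/MYLIB/lib | sudo tee /etc/ld.so.conf.d/mylib.conf
sudo ldconfig

# 2. Set the search path only for the current session
export LD_LIBRARY_PATH=/usr/local/MYLIB/lib:$LD_LIBRARY_PATH

# 3. Bake the path into the executable when linking against the library
gcc -o myapp main.o -L/usr/local/MYLIB/lib -lmylib -Wl,-rpath,/usr/local/MYLIB/lib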
I have an application that I've written for Windows which I am porting to Linux (Ubuntu to be specific). The problem is that I have always just used Linux, never really developed for it. More specifically, I don't understand the fundamental layout of the system. For example, where should I install my software? I want it to be accessible to all users, but I need write permission to the area to edit my data files. Furthermore, how can I determine in a programmatic way where the software was installed (not simply where it's being called from)? In Windows, I use the registry to locate my configuration file, which has all of the relevant information, but there is no registry in Linux. Thanks!
The Filesystem Hierarchy Standard (misnamed -- it is not a standard) will be very helpful to you; it clearly describes administrator preferences for where data should live.
Since you're packaging your software for the first time, I'd like to recommend doing very little. Debian, Ubuntu, Red Hat, SuSE, Mandriva, Arch, Annvix, Openwall, PLD, etc., all have their own little idiosyncrasies about how software is best packaged.
Building
Your best bet is to provide a source tarball that builds and hope users or packagers for those distributions pick it up and package it for you. Users will probably be fine with downloading a tarball, unpacking, compiling, and installing.
For building your software, make(1) is the usual standard. Other tools exist, but this one is available everywhere, and pretty reasonable. (Even if the syntax is cranky.) Users will expect to be able to run: make ; make install or ./configure ; make ; make install to build and install your software into /usr/local by default. (./configure is part of the autotools toolchain; especially nice for providing ./configure --prefix=/opt/foo to allow users to change where the software gets installed with one command-line parameter. I'd try to avoid the autotools as far as you can, but at some point it is easier to write portable software with them than without them.)
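If you write the makefile by hand instead of using the autotools, a minimal sketch that honours the usual prefix and DESTDIR conventions could look like this (foo is a placeholder program name; recipe lines must be indented with a tab):

# Minimal hand-written Makefile with a conventional install target
prefix ?= /usr/local
bindir  = $(prefix)/bin

foo: foo.c
	$(CC) $(CFLAGS) -o $@ $<

install: foo
	install -d $(DESTDIR)$(bindir)
	install -m 0755 foo $(DESTDIR)$(bindir)/foo

.PHONY: install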
Packaging
If you really do want to provide one-stop-packaging, then the Debian Policy Manual will provide the canonical rules for how to package your software. The Debian New Maintainers Guide will provide a kinder, gentler, walkthrough of the tools unique to building packages for Debian and Debian-derived systems.
Ubuntu's Packaging Guide may have details specific to Ubuntu. (I haven't read it yet.)
Configuration
When it comes to your application's configuration file, typically a file is stored in /etc/<foo> where <foo> represents the program / package. See /etc/resolv.conf for details on name resolution, /etc/fstab for a list of devices that contain filesystems and where to mount them, /etc/sudoers for the sudo(8) configuration, /etc/apt/ for the apt(8) package management system, etc.
Sometimes applications also provide per-user configuration; those config files are often stored in ~/.foorc or ~/.foo/, in case an entire directory is more useful than a file. (See ~/.vim/, ~/.mozilla/, ~/.profile, etc.)
If you also wanted to provide a -c <filename> command line option to tell your program to use a non-standard configuration file, that sometimes comes in real handy. (Especially if your users can run foo -c /dev/null to start up with completely default configuration.)
Data files
Users will store their data in their home directory. You don't need to do anything about this; just be sure to start your directory navigation boxes with getenv("HOME") or load your configuration files via sprintf(config_dir, "%s/%s/config", getenv("HOME"), ".application"); or something similar. (They won't have permissions to write anywhere but their home directory and /tmp/ at most sites.)
Sometimes all the data can be stored in a hidden file or directory; ssh(1), for example, keeps all its data in ~/.ssh/. Typically, users want the default key name from ssh-keygen(1) so ssh-agent(1) can find the key with the minimum of fuss. (It uses ~/.ssh/id_rsa by default.) The shotwell(1) photo manager provides a managed experience, similar to iPhoto.app from Apple. It lets users choose a starting directory, but otherwise organizes files and directories within as it sees fit.
If your application is a general-purpose program, you'll probably let your users select their own filenames. If they want to store data directly to a memory stick mounted under /mnt or /media, a remote filesystem mounted into /automount/blah, their home directories, a /srv/ directory for content served on the machine, or /tmp/, let them. It's up to users to pick reasonable filenames and directories for their data. It is up to users to have proper permissions already. (Don't try to provide mechanisms for users to write in locations they don't have privileges for.)
Application file installation and ownership
There are two common ways to install an application on a Linux system:
The administrator installs it once, for everyone. This is usual. The programs are owned by root or bin or adm or some similar account. The programs run as whichever user executes them, so they get the user's privileges for creating and reading files. If they are packaged with distribution packaging files, executables will typically live in /usr/bin/, libraries in /usr/lib/, and non-object-files (images, schemas, etc.) will live in /usr/share/. (/bin/ and /lib/ are for applications needed at early boot or for rescue environments. /usr might be common to all machines in a network, mounted read-only late in the boot up process.) (See the FHS for full details.)
If the programs are unpackaged, then /usr/local/ will be the starting point: /usr/local/bin/, /usr/local/lib/, /usr/local/share/, etc. Some administrators prefer /opt/.
Users install applications into their home directory. This is less common, but many users will have a ~/bin/ directory where they store shell scripts or programs they write, or link in programs from a ~/Local/<foo>/ directory. (There is nothing magic about that name. It was just the first thing I thought of years ago. Others choose other names.) This is where ./configure --prefix=~/Local/blah pays for itself.
On Linux, configuration is typically stored as plain text.
Configuration is kept in configuration files, which normally have a .conf extension and are stored in the /etc directory.
The executable of your application normally resides in /usr/bin. The data files of your application can go to /usr/lib or another directory under /usr (commonly /usr/share).
It is important to consider which language you are writing your application in. In C/C++, installation is usually done by a custom makefile that copies these files into their respective folders. The installation location can be recorded by writing it into the .conf file when it is generated, for example from a bash script.
You should really know bash scripting in order to automate all of this.