Does 'make'-ing something from source make it self-contained? - makefile

Forgive me before I start, as I'm not a C / C++ etc. programmer, merely a PHP one :) but I've been working on projects that use code from others, sourced from online open-source repositories via svn and git. For some of these projects, I need to install libraries and then run "./configure", "make" and then "make all" (as an example), and I do this on a "build" virtual machine to get the binaries that I need to use within my project.
The ultimate goal of some of my projects is to then take these "compiled" (if that's the correct term) binaries and place them onto a virtual machine which I would then re-distribute (according to licenses etc).
My question is this: when I build these binaries on my build machine, with all the prerequisites that I need in order to build them in the first place ("build-essential", "cmake", "gcc" etc.) - once the binaries are on my build machine (in /usr/lib, for example), are they self-contained to the point that I can merely copy those /usr/lib binary files that the build created and place them in the same folder on the virtual machines that I would distribute, without the distributed machines having to have all the build components installed on them?
Given all the dependencies that I need to build the source in the first place, would the final built binary contain them all in itself, or would I have to include them on the distributed servers as well?
Would that work? Or is the question a little too general, and does it all depend on what I'm building?
Update from original posting after a couple of responses
I will be distributing the VMs myself, inasmuch as I will build them and then install my projects on them. Therefore, I know the OS and environment completely. I just don't want to "bloat" them with installed software that I don't actually need in order to run the compiled executables I will place on the distributed VMs, in for example /usr/local/bin ...

That depends on how you link your program to the libraries it depends on. In most cases, the default is to link dynamically, which means that you need to distribute your executable along with its dependencies. You can check which libraries are required to run the file using the ldd command.
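For example (the binary path below is just a placeholder for one of your built files), ldd lists each shared library the dynamic loader will look for at run time:
# list the shared libraries the dynamic loader will need at run time
ldd /usr/local/bin/myprog
# typical lines look like: libssl.so.1.1 => /usr/lib/x86_64-linux-gnu/libssl.so.1.1
# a line reading "libfoo.so => not found" means the target VM lacks that dependency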
Theoretically, you can link everything statically, which means that the library code is compiled into the executable. The executable would then really be self-contained, but linking statically is not always possible. This depends on the actual libraries you are using and probably requires playing with ./configure arguments when building them.
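A rough sketch of what that can look like (main.c and myprog are placeholders, and --enable-static/--disable-shared are common but not universal Autotools/libtool options):
# a single-file program linked fully statically
gcc -static main.c -o myprog
ldd ./myprog          # should report "not a dynamic executable"
# for Autotools-based libraries, static archives are often requested like this
./configure --enable-static --disable-shared
make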
Finally, there are some libraries, such as libc, that are essentially always linked dynamically. The good thing is that the machine you are distributing to will almost surely have this library. The bad thing is that the versions of these libraries may differ, and you might face an ABI mismatch.
In short, if your project is not huge and it is possible to link everything statically, go that way. If not, read about AppImage and Docker.

Distributing the built libraries and headers (a binary distribution) is a possible way and should work. (I always do this in my projects.)
It is not necessary for all of the libraries you built to be installed into /usr/lib. To keep your target machine clean, you can install them into another folder instead, e.g.
/usr/local/MYLIB/lib/libmylib.so
/usr/local/MYLIB/include/mylib.h
/usr/local/MYOTHERLIB/lib/libmyotherlib.so
/usr/local/MYOTHERLIB/include/myotherlib.h
Advantages:
Easy installation, easy removal
All files within one subfolder, no files go missing, no mixing with other libs
Disadvantage:
The loader must know the extra search path (see the sketch just below for common ways to set it)
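On a GNU/Linux target, a few common ways to give the loader that extra search path (the MYLIB paths follow the example above; main.c/myprog are placeholders):
# per-session: extend the loader's search path
export LD_LIBRARY_PATH=/usr/local/MYLIB/lib:$LD_LIBRARY_PATH
# system-wide: register the directory and refresh the loader cache
echo /usr/local/MYLIB/lib | sudo tee /etc/ld.so.conf.d/mylib.conf
sudo ldconfig
# or bake the path into your own executable at link time (rpath)
gcc main.c -o myprog -L/usr/local/MYLIB/lib -lmylib -Wl,-rpath,/usr/local/MYLIB/lib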

Related

Fastest way to rebuild the Linux kernel using rpmbuild

I just built the linux kernel for CentOS using the instructions that can be found here: https://wiki.centos.org/HowTos/Custom_Kernel
Now, I made my changes and I would like to rebuild the kernel and test it with my changes. How do I do that but:
1. Without having to recompile everything. The build process should reuse whatever object files were generated by the first build that won't need to be modified.
2. Without having to build the other packages that are built with the kernel (e.g., debuginfo, tools, debug-devel, ...etc.).
Thanks.
You cannot. The paradigm of rpmbuild is to always start from a clean slate to ensure reproducibility and predictability. The subpackages would also be invalidated, because they depend on the exact output of your kernel build (e.g. locations within the binary images where certain symbols are defined) that may have changed when you rebuilt it.

How to install .d files of a D library using automake?

What is the correct way to install (system-wide) a D library (on a GNU system, at least) using Makefile.am?
Here is my code to install the static and shared libraries:
install-data-local:
	install librdf_dlang.a librdf_dlang.so $(libdir)
The remaining question is how to install .d files for developers to use my library?
Particularly, what should be the installation directory for .d files?
If you are doing a system-wide installation of D libraries and source (interface files I presume), then the most common places are /usr/include/<project name> or /usr/local/include/<project name> as long as it does not clash with some existing C/C++ project that stores header files there. Some D programmers prefer /usr/include/d/ or /usr/local/include/d/ as well...
I, for example, use /usr/di (D imports) for this purpose, and my library projects have all their interface files there. I will explain below why I do not like to have separate project directories there.
No matter what directory you pick, you need to update your compiler search paths.
Here is a part of my dmd.conf:
[Environment64]
DFLAGS=-I/usr/include/dmd/phobos -I/usr/include/dmd/druntime/import -I/usr/di -L-L/usr/lib64 -L--export-dynamic -fPIC
and ldc2.conf looks like:
// default switches appended after all explicit command-line switches
post-switches = [
"-I/usr/include/d/ldc",
"-I/usr/include/d",
"-I/usr/di",
"-L-L/usr/lib64",
];
If you prefer to have a separate directory for every project, you will end up with a -I<path> for each of them. I really do not like this approach. However, it is very popular among developers, so it is really up to you how to organise the D import files. I know how much developers dislike the Java approach with domain.product.packages, but it fits nicely into a single place where all D interface files live, and most importantly there are no clashes because of the domain/product part...
According to the Filesystem Hierarchy Standard (and e.g. this SO question),
/usr/local/include
looks like a strong candidate on a "linux/unix-like system". See especially note 9:
Historically and strictly according to the standard, /usr/local is for data that must be stored on the local host (as opposed to /usr, which may be mounted across a network). Most of the time /usr/local is used for installing software/data that are not part of the standard operating system distribution (in such case, /usr would only contain software/data that are part of the standard operating system distribution). It is possible that the FHS standard may in the future be changed to reflect this de facto convention.
I have no idea about Windows.
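Whichever directory you settle on, the automake side of the question can be handled with a custom installation directory. A sketch only, where rdf_dlang.d is an assumed module name matching the library names in the question:
# Makefile.am: install the D interface file(s) into a directory of your choosing
dimportsdir = $(includedir)/rdf_dlang
dimports_DATA = rdf_dlang.d
# custom "<name>dir" + "<name>_DATA" pairs like this are automake's standard
# mechanism for installing arbitrary files, so no hand-written install rule is needed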

Effective way of distributing go executable

I have a Go app which relies heavily on static resources like images and jars. I want to install that Go executable on different platforms, like Linux, Mac and Windows.
I first thought of bundling the resources using https://github.com/jteeuwen/go-bindata, but since the files (~100 of them) add up to about 20 MB, it takes a really long time to build the executable. I thought having a single executable would be an easy way for people to download and run it, but it seems that is not an effective approach.
I then thought of writing an installation package for each platform, e.g. creating .rpm or .deb packages. These packages would contain all the resources and put them into platform-specific predefined locations, which the Go executable could then reference. The only thing is that I have to handle that in the Go code: I have to check if it is Windows and then load the files from, say, c:\go-installs, or if it is Linux then load the files from, say, /usr/local/share/go-installs. I want the Go code to be as platform-agnostic as it can be.
Or is there some other strategy for this?
Thanks
Possibly this does not qualify as a real answer, but still…
As to your point №2, one way to handle this is to exploit Go's support for conditional compilation: you might create a set of files like res_linux.go, res_windows.go etc. and put the same set of variables in each, pointing to different locations, like
var InstallsPath = `C:\go-installs`
in res_windows.go and
var InstallsPath = `/usr/share/myapp`
in res_linux.go and so on. Then in the rest of the program just reference the res.InstallsPath variable and use the path/filepath package to construct full pathnames of actual resources.
Of course, another way to go is to do a runtime switch on the runtime.GOOS variable, possibly in an init() function in one of the source files.
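A minimal sketch of that runtime switch, reusing the example paths from above (the package and variable names are just illustrative):
package res

import "runtime"

// InstallsPath is the platform-specific directory the resources were installed to.
var InstallsPath string

func init() {
	// pick the location once, when the package is initialised
	switch runtime.GOOS {
	case "windows":
		InstallsPath = `C:\go-installs`
	default: // linux, darwin, the BSDs, ...
		InstallsPath = "/usr/share/myapp"
	}
}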
Pack everything in a zip archive and read your resource files from it using archive/zip. This way you'll have to distribute just two files—almost "xcopy deployment".
Note that while on Windows you could just have your executable extract the directory from the pathname of itself (os.Args[0]) and assume the resource file is located in the same directory, on POSIX platforms (GNU/Linux and *BSD etc) the resource file should still be located under /usr/share/myapp or a similar place dictated by FHS (or particular distro's rules), so some logic to locate that file will still be required.
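A sketch of that zip approach (resources.zip and the resource path are assumed names; the directory-of-the-executable trick is the simplification described above, so on POSIX systems you would substitute the FHS path instead):
package main

import (
	"archive/zip"
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// readResource returns the contents of one file stored in a resources.zip
// that sits next to the executable.
func readResource(name string) ([]byte, error) {
	exeDir := filepath.Dir(os.Args[0])
	r, err := zip.OpenReader(filepath.Join(exeDir, "resources.zip"))
	if err != nil {
		return nil, err
	}
	defer r.Close()
	for _, f := range r.File {
		if f.Name != name {
			continue
		}
		rc, err := f.Open()
		if err != nil {
			return nil, err
		}
		defer rc.Close()
		return io.ReadAll(rc)
	}
	return nil, fmt.Errorf("resource %q not found", name)
}

func main() {
	data, err := readResource("images/logo.png") // hypothetical resource path inside the zip
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("loaded %d bytes\n", len(data))
}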
All in all, if this is supposed to be a piece of FOSS, I'd go with the first variant to let the downstream packagers tweak the pathnames. If this is a proprietary (or just niche) software the second idea appears to be rather OK as you'll play the role of downstream packagers yourself.

Make, install, executing a program

I have been a CS student for a while, and it seems like I (and many of my friends) never understood what's happening behind the scenes when it comes to make, install, etc.
Correct me if I'm wrong, but is make a way to compile a set of files?
What does it mean to "install a program on a computer", like on Windows? When I am coding in languages such as Java or Perl, we don't install what we wrote; we compile it (or, for an interpreted language, just run it). So why do programs such as Skype need to be "installed"?
Can anyone clarify this? I feel like this is something I need to know as a programmer.
Make is a build system
Make is a build system, which is simply a way to script the steps needed to compile a program. Make specifically can be used with anything, but it is usually used to compile C or C++ programs. It simplifies things and gives programmers a standard way to script the preparation of their program, so that it can be built and installed with ease.
Why a build system
You see, if your program is a simple one-source-file program, then using make might be overkill, as compiling the simplest C program is as simple as
gcc simpleprogram.c -o simpleprogram.out
However, as the size of the software grows, its complexity grows, and the complexity of how it needs to be built grows. For example, you may want to determine which version of each library is installed on the computer you are compiling on, you may want to run some tests after compiling your program to check that it is working correctly, or you may want to automatically download some dependencies your program has.
Most software eventually needs a mixture of these tasks. So, instead of reinventing the wheel, programmers use a build system that allows scripting all of this. If you are familiar with Java (which you mentioned), a build system comparable to make, but used in the Java world, is Apache Ant.
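As a tiny illustration (the file names here are made up), a Makefile simply records those steps and the dependencies between them, so make can decide what actually needs rebuilding; note that recipe lines must start with a tab:
# hypothetical project: build "simpleprogram" from two C files
CC = gcc
CFLAGS = -Wall -O2

simpleprogram: main.o util.o
	$(CC) $(CFLAGS) -o simpleprogram main.o util.o

%.o: %.c
	$(CC) $(CFLAGS) -c $<

clean:
	rm -f simpleprogram *.o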
Why install
Well, let's assume that you used the "make" command but not "make install". The "make" command is usually used just to prepare the program for compilation and then compile it. However, once your program is compiled, all you have is an executable in the directory in which you compiled it. The program, its documentation, and its configuration files haven't been put into the appropriate directories needed for all users to use it. That's what "make install" is for: it takes all the files associated with the program you just compiled and puts them in the appropriate directories, so that the program becomes available to everyone and each component ends up in the place your operating system expects.
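For a typical Autotools-based project, the whole sequence ties these together (the prefix shown is just the conventional default):
./configure --prefix=/usr/local   # decide where "make install" will later put the files
make                              # build everything inside the source/build directory
sudo make install                 # copy binaries, libraries, docs and configs under /usr/local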
make is a piece of software that reduces the amount of code that needs to be recompiled: it compares the modification times of the source files with that of the target. If the source has changed, a compile is done to reconstruct the target; otherwise that step can be skipped.
Installing software means placing the executables/configuration files into the right places, perhaps constructing some files along the way, e.g. usernames in your Skype example.

How should open source libraries be used on Windows?

There are many open-source libraries that can be compiled with Visual Studio. I'm porting a program from Linux to Windows, but it depends on a number of libraries. I don't know what the best practices regarding libraries are on Windows.
On Linux, these libraries are typically part of the distribution. To use sqlite on Debian, for example, you need only to install libsqlite3-dev and the include files and libraries (both static and dynamic) are automatically installed and available to your program.
If you need a different version than your distribution supplies, you can compile it in your home directory, install it to ~/include and ~/lib, and set the appropriate environment variables so that your compiler includes those directories in its search path.
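Concretely, that home-directory setup usually looks something like this (a sketch; CPATH and LIBRARY_PATH are the search-path environment variables GCC and Clang honour):
./configure --prefix=$HOME
make && make install
export CPATH=$HOME/include          # extra include search path at compile time
export LIBRARY_PATH=$HOME/lib       # extra search path at link time
export LD_LIBRARY_PATH=$HOME/lib    # extra search path at run time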
What is the best way to use libraries that are distributed as source on Windows? If I link dynamically rather than statically, is there an easy way to copy required DLLs into the output directory to ease redistribution (assuming license requirements are met)?
Option 1 - Projects that have binary distributions for Windows / do not build in DevStudio.
E.g. OpenSSL.
Projects like OpenSSL are best downloaded to their own folder and built using their own scripts. OpenSSL typically installs itself to C:\OpenSSL on Windows builds, so once done, you can add C:\OpenSSL\include and C:\OpenSSL\lib to your project environment to access the OpenSSL headers and libs. The actual DLL files you will need to copy from C:\OpenSSL\bin into your project's staging folder (normally your SolutionDir\Debug or Release).
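One way to automate that last copy (a sketch; $(OutDir) is the usual Visual Studio output-directory macro and the OpenSSL path mirrors the example above) is a post-build event on your project:
rem post-build event: copy the OpenSSL DLLs next to the freshly built executable
xcopy /y /d "C:\OpenSSL\bin\*.dll" "$(OutDir)"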
Once you've gone through the hassle of building OpenSSL once, you don't want to do it again. Or, if you've downloaded the binary distribution, it's best left alone. Just document for others which binary distribution you used so they can set up their Visual Studio build environment appropriately.
Option 2 - Small libraries that are easy to create Visual Studio projects for (or that already have them). Lua and sqlite fall into this category.
For projects that are small enough, it is not inconvenient to simply add them to your solution in a subfolder. This way you can get their outputs built directly into the solution's output folder, and you do not have to bundle pre-built binary files in your solution, making it far easier to share the project with others.
Option 3 - As an alternative, you could create your own standardized folders for the products of open-source projects. Create C:\oss\include, C:\oss\lib, C:\oss\bin etc., add these paths to DevStudio's lib and include paths, add C:\oss\bin to the system's PATH variable, and as you build each OSS project, copy the appropriate files to these locations.
Again, while convenient, this setup makes it difficult to replicate the build environment on a second PC, so you might want to keep the entire C:\oss tree in source control as well.
On Windows, the easiest way is to build your own DLLs and include them in the program directory.
Yes, it uses a bit more space, but hard drives are large these days and it avoids a lot of the headaches of incompatible versions (DLL hell). Windows also suffers a few more wrinkles with versions of libs built with different compilers, so shipping your own builds is safest.
