Using dpkg-buildpackage to build multiple packages from the same source

I have a source tree that builds two different projects from the same source: you run make A or make B, and preprocessor variables (ifdefs and the like) produce two versions of the output. I'm looking to make dpkgs for these; I can build one fine, but I'm unsure of a good way to handle both.
Currently I run dpkg-buildpackage and get A.deb or similar. Is there a way to do something like dpkg-buildpackage -target B so that it builds a Debian package for that project?
Questions such as Creating multiple packages with dpkg-buildpackage seem to refer to having separate source code for the separate projects, which is not the case here.
I am in control of the source code, so I can make changes there.
Thanks.

You can set up one rules file to build two separate Debian binary packages at the same time. But if they are unrelated, this is an abuse of the Debian packaging procedure, which is designed for building multiple related binary packages from a single source package.
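For illustration, a rough sketch of that setup using the dh sequencer; the package names and the install-file split are hypothetical, and the rules recipe assumes your upstream Makefile's A and B targets. In debian/control, one source stanza is followed by two binary stanzas:

Package: myproject-a
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: variant A of myproject

Package: myproject-b
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: variant B of myproject

And debian/rules builds both variants (recipe lines must be tab-indented):

#!/usr/bin/make -f
%:
	dh $@

# Build both variants instead of the default single target.
override_dh_auto_build:
	$(MAKE) A
	$(MAKE) B

# Split the installed files between the two packages, e.g. with
# debian/myproject-a.install and debian/myproject-b.install.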

Related

Fastest way to rebuild the Linux kernel using rpmbuild

I just built the Linux kernel for CentOS using the instructions found here: https://wiki.centos.org/HowTos/Custom_Kernel
Now I have made my changes, and I would like to rebuild the kernel and test them. How do I do that, but:
1. Without having to recompile everything; the build process should reuse whatever object files the first build generated that don't need to change.
2. Without having to build the other packages that are built with the kernel (e.g., debuginfo, tools, debug-devel, etc.).
Thanks.
You cannot. The paradigm of rpmbuild is to always start from a clean slate to ensure reproducibility and predictability. The subpackages would also be invalidated, because they depend on the exact output of your kernel build, e.g. the locations within the binary images where certain symbols are defined, which may have changed when you rebuilt it.

Does 'make'-ing something from source make it self-contained?

Forgive me before I start, as I'm not a C/C++ etc. programmer, a mere PHP one :) but I've been working on projects that use others' code sourced from online open-source repos, such as svn and git. For some of these projects, I need to install libraries and then run "./configure", "make" and then "make all" (as an example), and I do this on a "build" virtual machine to get the binaries that I need to use within my project.
The ultimate goal of some of my projects is to then take these "compiled" (if that's the correct term) binaries and place them onto a virtual machine which I would then redistribute (according to licenses etc.).
My question is this: when I build these binaries on my build machine, with all the prerequisites that I need in order to build them in the first place ("build-essential", "cmake", "gcc", etc.), once the binaries are on my build machine (in /usr/lib for example), are they self-contained to the point that I can merely copy those /usr/lib binary files that the build created and place them in the same folder on the virtual machines that I would distribute, without those machines having all the build components installed on them?
With all the dependencies that I would need to build the source in the first place, would the final built binary contain them all in itself, or would I have to include them on the distributed servers as well?
Would that work? Is the question a little too general, and perhaps it would all depend on what I'm building?
Update from original posting after a couple of responses
I will be distributing the VMs myself, inasmuch as I will build them and then install my projects upon them. Therefore, I know the OS and environment completely. I just don't want to "bloat" them with unnecessary installed software that I don't actually need, because the compiled executables will be placed on the distributed VMs in, for example, /usr/local/bin ...
That depends on how you link your program to the libraries it depends on. In most cases, the default is to link dynamically, which means that you need to distribute your executable along with its dependencies. You can check which libraries are required to run the file using the ldd command.
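For example (a typical session; the library names shown are illustrative):

$ ldd /usr/lib/libmylib.so
        linux-vdso.so.1 (0x00007ffd...)
        libssl.so.3 => /lib/x86_64-linux-gnu/libssl.so.3 (0x00007f...)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f...)

Every entry on the right-hand side is a dependency that must also be present on the target machine.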
Theoretically, you can link everything statically, which means that the library code is compiled into the executable. The executable would then really be self-contained, but linking statically is not always possible. It depends on the actual libraries you are using, and probably requires playing with ./configure arguments when building them.
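For instance, with a typical autoconf/libtool project the relevant flags often look like this (a sketch; the exact options vary per project, so check ./configure --help first):

# Build a library as a static archive only:
./configure --enable-static --disable-shared
make
# For the final program, a fully static link is often requested like this:
./configure LDFLAGS=-static
make

Whether -static actually succeeds depends on static versions of every dependency being available.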
Finally, there are some libraries that are always linked dynamically, such as libc. The good thing is that the machine you are distributing to will surely have this library. The bad thing is that the versions of these libraries may differ, and you might face an ABI mismatch.
In short, if your project is not huge and it is possible to link everything statically, go that way. If not, read about AppImage and Docker.
Distributing built libraries and headers (binary distribution) is a possible approach and should work. (I always do it in my projects.)
It is not necessary for all of the libraries you built to be installed into /usr/lib. To keep your target machine clean, you can install them in another folder instead, e.g.
/usr/local/MYLIB/lib/libmylib.so
/usr/local/MYLIB/include/mylib.h
/usr/local/MYOTHERLIB/lib/libmyotherlib.so
/usr/local/MYOTHERLIB/include/myotherlib.h
Advantages:
Easy installation, easy removal
All files within one subfolder, no files go missing, no mixing with other libs
Disadvantage:
The loader must know the extra search path
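For example, a minimal sketch using the layout above (the system-wide variant needs root):

# Register the extra directories with the dynamic loader system-wide:
echo /usr/local/MYLIB/lib | sudo tee /etc/ld.so.conf.d/mylib.conf
echo /usr/local/MYOTHERLIB/lib | sudo tee /etc/ld.so.conf.d/myotherlib.conf
sudo ldconfig

# Or per invocation, without touching system configuration:
LD_LIBRARY_PATH=/usr/local/MYLIB/lib:/usr/local/MYOTHERLIB/lib ./myprogram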

How to add file to all targets in Xcode when you have many targets?

I have a project with over 50 targets (and growing), and it is becoming cumbersome to add files to all of them because selecting each target takes a long time.
I am aware of multiple methods to add a file to multiple targets, but they all involve checking a box for each target (for those looking for that: How to add .plist file to all targets in XCode?).
What I'm looking for is an alternative method or script that can be used to add a file or set of files to all targets in the project without selecting them one by one. Anyone know of a trick?
Here is a link to a GitHub repo. In the example project I have three targets, with the file added only to the first target. When you run ruby ./addfile.rb from the project's directory, the script will add the img.jpg resource to the two other targets (projex2, projex3). You need to install the xcodeproj Ruby gem before running the script: run sudo gem install xcodeproj in a terminal.
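A minimal sketch of what such a script can look like with the xcodeproj gem (the project file name here is a hypothetical; adjust it to your own, and note it assumes every target is a native target with a resources phase):

require 'xcodeproj'

# Open the project (hypothetical name; use your own .xcodeproj).
project = Xcodeproj::Project.open('projex.xcodeproj')

# Create a file reference for the resource in the main group.
file_ref = project.main_group.new_file('img.jpg')

# Add it to the resources build phase of every target,
# skipping targets that already reference it.
project.targets.each do |target|
  next if target.resources_build_phase.files_references.include?(file_ref)
  target.add_resources([file_ref])
end

project.save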
If you're often adding the same source to all your targets, then one way to avoid that would be to create a framework that contains all the files your various targets have in common, and then have each of those targets link in the framework. That way, you only add common files to the one framework, and all the targets get access to them via the framework.

CMake - Build custom build paths for different configurations

I have a project whose make process generates different build artifacts for each configuration: e.g., if invoked with make a=a0 b=b0, it builds object files into builds/a0.b0, generates a binary myproject.a0.b0, and finally updates a generic (unversioned) executable symlink to point to the most recently built project, ln -s myproject.a0.b0 myproject (a sketch of the scheme follows the list below). For this project, this is a useful feature mainly because:
It separates the object files for different configurations (so when I rebuild in another configuration I don't have to recompile every single source file with the new defines and configuration set, which is unfortunately a very common procedure).
It retains the binaries for each configuration (so it's not required to rebuild to use a different configuration if it has already been built).
It makes a copy (or link) to the last built binary which is useful in testing.
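For reference, a minimal sketch of the kind of Makefile scheme described above (names, flags, and the single-object layout are illustrative, not the real file; recipe lines are tab-indented):

a ?= a0
b ?= b0
BUILDDIR := builds/$(a).$(b)
TARGET   := myproject.$(a).$(b)

$(TARGET): $(BUILDDIR)/main.o
	$(CC) -o $@ $^
	ln -sf $(TARGET) myproject   # refresh the generic link

$(BUILDDIR)/main.o: main.c | $(BUILDDIR)
	$(CC) -DCONF_A=$(a) -DCONF_B=$(b) -c -o $@ $<

$(BUILDDIR):
	mkdir -p $@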
Right now this is implemented in an ugly, decades-old, non-portable Makefile. I'd like to reproduce the same behavior in CMake to allow easier building on other platforms, but I have not been able to reproduce it in any reasonable manner. Adding targets with add_library/add_executable seems like the way, but doing this for each possible permutation of input parameters seems wrong. And I'm not sure I could get the usage right, allowing make, make a=a0, make b=b0 a=a0, as opposed to what specifying a CMake target would require: make myproject-a0.b0.
Is this possible to do in CMake? Namely:
Specify the build directory based on input parameters.
Accept the parameters as make arguments which can be left out (defaulted) to select the appropriate target at the level of the makefile (so rerunning cmake is not required for a new configuration).

How to create packages for different configurations of the same product in PackageScript?

We call Mac PackageMaker from an Ant script to build our product's installation package.
I would like to pass a parameter 'productConfiguration' that will direct the package to include or exclude certain components, e.g. in order to create a smaller Trial version package.
What is the correct way to achieve that?
Notes:
In Windows we use InstallShield's Features, Conditions, Release Flags, and Configuration Flags. (Are there similar concepts in PackageMaker?)
Where is the documentation of pkmkdoc spec 1.12?
The only way I can think of is to generate [some of] the XML files inside the install.pmdoc smart folder using templates, but that looks very inelegant to me.
PackageMaker doesn't offer many sophisticated features for things like this. I would suggest tweaking the Ant script by creating a separate build target that builds the trial installer. That target can customize both the files included and the PackageMaker parameters.
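A rough sketch of that idea (the target names, the pmdoc documents, and the packagemaker path are assumptions to adapt; each target points at its own document bundle, with the trial one omitting the non-trial components):

<target name="installer-full">
  <exec executable="/Developer/usr/bin/packagemaker">
    <arg value="--doc"/>
    <arg value="install-full.pmdoc"/>
    <arg value="--out"/>
    <arg value="MyProduct.pkg"/>
  </exec>
</target>

<target name="installer-trial">
  <exec executable="/Developer/usr/bin/packagemaker">
    <arg value="--doc"/>
    <arg value="install-trial.pmdoc"/>
    <arg value="--out"/>
    <arg value="MyProductTrial.pkg"/>
  </exec>
</target>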
