Consider a distributed compiler such as IncrediBuild or distcc: suppose I send a compile job to a remote machine to compile a C++ source file that depends on a static or dynamic library (something I would need to install in order to build my program). Does the remote machine need that library in order to compile it?
From my understanding of C and C++, when a source file is compiled into an object file, the compiler "stubs out" anything external to the source code (such as a call to a function that is not defined in the code, i.e. one that is only declared in a header). When linking occurs, that's when the dependencies need to be present, so the linker can inspect them and figure out where each function is implemented. If that is the case, does it mean distributed compilers can only do compile+link if they have the dependencies installed? Does the same hold true for both dynamic and static libraries?
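To illustrate what I mean, here is a minimal sketch (foo.h, foo_compute and libfoo are made-up names):

    // main.cpp - calls a function that lives in an external library
    // (foo.h, foo_compute and libfoo are hypothetical)
    #include "foo.h"            // only the declaration is needed to compile

    int main() {
        return foo_compute(42); // no definition in this translation unit
    }

    // Compiling to an object file needs only the header, not the library:
    //   g++ -c main.cpp -o main.o     <- works on a machine without libfoo
    // Linking is where the definition has to be found:
    //   g++ main.o -lfoo -o app       <- fails unless libfoo.a / libfoo.so is installed

So my mental model is that a remote machine only needs the headers to produce main.o, and the library only at link time.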
Build order is indeed at the heart of any successful compilation process :) A distributed build must preserve the same build order while performing the work in a much more parallel manner.
When you perform a build with IncrediBuild, the machine that starts the build manages the build process and makes sure that each distributed compilation task is executed in the correct build order, no matter which machine actually performs it. When a compilation task is sent to a remote machine, it is sent along with the base environment settings of the machine that started the build. We call this "Process Virtualization".
If, during a compilation on a remote machine, a certain file is needed (as in the scenario you have described), IncrediBuild can fetch it from the machine that started the build and place it in the cache of the remote machine. That file will be used for this compilation task and will be kept in the remote machine's cache for future use.
Because of this mechanism, the remote machines don't need to have any source files at all. They can basically be regular Windows machines - without any development environment.
Hope I was able to shed some light on this issue.
I just built the Linux kernel for CentOS using the instructions found here: https://wiki.centos.org/HowTos/Custom_Kernel
Now, I made my changes and I would like to rebuild the kernel and test it with my changes. How do I do that but:
1. Without having to recompile everything. That is, the build process should reuse whatever object files were generated by the first build that won't need to be modified.
2. Without having to build the other packages that are built with the kernel (e.g., debuginfo, tools, debug-devel, etc.).
Thanks.
You cannot. The paradigm of rpmbuild is to always start from a clean slate to ensure reproducibility and predictability. The subpackages would also be invalidated, because they depend on the exact output of your kernel build, e.g. the locations within the binary images where certain symbols are defined, which may have changed when you rebuilt it.
Forgive me before I start, as I'm not a C / C++ etc. programmer, merely a PHP one :) but I've been working on projects that use others sourced from online open-source repos such as svn and git. For some of these projects, I need to install libraries and then run "./configure", "make" and then "make all" (as an example), and I do this on a "build" virtual machine to get the binaries that I need to use within my project.
The ultimate goal of some of my projects is to then take these "compiled" (if that's the correct term) binaries and place them onto a virtual machine which I would then re-distribute (according to licenses etc).
My question is this: when I build these binaries on my build machine, with all the prerequisites that I need in order to build them in the first place ("build-essential", "cmake", "gcc", etc.), are the resulting binaries (in /usr/lib, for example) self-contained to the point that I can merely copy the files that the build created and place them in the same folder on the virtual machines that I distribute, without those machines having all the build components installed on them?
In other words, given all the dependencies I need in order to build the source in the first place, would the final binary contain them all in itself, or would I have to install them on the distributed servers as well?
Would that work? Or is the question a little too general, and does it all depend on what I'm building?
Update from original posting after a couple of responses
I will be distributing the VMs myself, inasmuch as I will build them and then install my projects upon them. Therefore, I know the OS and environment completely. I just don't want to "bloat" them with installed software that I don't actually need, since the compiled executables I place on the distributed VMs will live in, for example, /usr/local/bin ...
That depends on how you link your program to the libraries it uses. In most cases, the default is to link dynamically, which means that you need to distribute your executable along with its dependencies. You can check which libraries are required to run the file using the ldd command.
In theory, you can link everything statically, which means the library code is copied into the executable. The executable would then really be self-contained, but linking statically is not always possible: it depends on the actual libraries you are using and will probably require playing with ./configure arguments when building them.
Finally, there are some libraries that are almost always linked dynamically, such as libc. The good thing is that the machine you are distributing to will surely have this library. The bad thing is that the versions of these libraries may differ, and you might face an ABI mismatch.
In short, if your project is not huge and it is possible to link everything statically, go that way. If not, read about AppImage and Docker.
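To make that concrete, here is a minimal sketch; zlib is just a stand-in for whatever library your project actually uses, and the static variant assumes a libz.a is available on the build machine:

    // hello.cpp - depends on one external library (zlib, as an example only)
    #include <zlib.h>
    #include <cstdio>

    int main() {
        std::printf("built against zlib %s\n", zlibVersion());
        return 0;
    }

    // Dynamic linking (the usual default):
    //   g++ hello.cpp -lz -o hello
    //   ldd hello       <- lists libz.so.1, which must exist on every target VM
    //
    // Linking just this library statically (libc stays dynamic):
    //   g++ hello.cpp -Wl,-Bstatic -lz -Wl,-Bdynamic -o hello
    //   ldd hello       <- libz no longer appears; its code is embedded in the executable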
Distributing the built libraries and headers (a binary distribution) is a possible way and should work. (I always do it in my projects.)
It is not necessary for all of the libraries you built to be installed into /usr/lib. To keep your target machine clean, you can install them into their own folders instead, e.g.
/usr/local/MYLIB/lib/libmylib.so
/usr/local/MYLIB/include/mylib.h
/usr/local/MYOTHERLIB/lib/libmyotherlib.so
/usr/local/MYOTHERLIB/include/myotherlib.h
Advantages:
Easy installation, easy removal
All files are within one subfolder, no files are missing, no mixing with other libs
Disadvantage:
The loader must know the extra search path
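One common way to deal with that disadvantage is to bake the extra search path into the executable at link time. A sketch, where mylib.h and mylib_init are illustrative names and the paths mirror the layout above:

    // app.cpp - uses the library installed under /usr/local/MYLIB
    // (mylib.h / mylib_init / libmylib are illustrative names)
    #include <mylib.h>

    int main() {
        return mylib_init();
    }

    // -I tells the compiler where the header lives, -L tells the linker where the .so lives,
    // and -Wl,-rpath embeds that directory into the executable's runtime search path so the
    // loader can find libmylib.so without extra configuration on the target machine:
    //   g++ app.cpp -I/usr/local/MYLIB/include -L/usr/local/MYLIB/lib -lmylib \
    //       -Wl,-rpath,/usr/local/MYLIB/lib -o app

Alternatives are to set LD_LIBRARY_PATH, or to add the directory in a file under /etc/ld.so.conf.d/ and run ldconfig.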
I am working in a project where we have just added parallelism to our build system, using GNU Make.
We build both libraries and the programs in parallel.
First we build all the libs necessary for the binaries. After the libs are created we start building the binaries.
Now, when running our programs, we have found that one of the binaries doesn't run as expected. Is it possible that GNU Make could produce a broken binary when building in parallel, even though everything links correctly? If so, what is the common cause, and how can one avoid it?
Correct parallel builds depend on a correct makefile. If a build works serially but not in parallel, that means that your makefile has not declared all the prerequisites that it needs, so make doesn't realize it can't build target X until after target Y is complete.
However, it's extremely unlikely that these kinds of errors would allow the build to succeed: the compiler or linker will almost always fail if things are built in the wrong order. It's hard for me to imagine how the build would succeed except by the purest chance, if at all (maybe if your tools overwrite an existing file instead of deleting it and writing it from scratch). Of course, you've given no information about exactly what "doesn't run as expected" means, so it's hard to say for sure.
To investigate you need to do some testing: does it fail the same way every time you do a parallel build? Does it fail even if you use different amounts of parallelism (different -j levels)? Does it continue to fail if you switch back to non-parallel builds? Does the build succeed with -j even if you start with a completely clean workspace (nothing built)?
I've got a certain project that I build and distribute to users. I have two build configurations, Debug and Release. Debug, obviously, is for my use in debugging, but there's an additional wrinkle: the Debug configuration uses a special debugging memory manager, with a dependency on an external DLL.
There have been a few times when I've accidentally built and distributed an installer package with the Debug configuration, and it has then failed to run once installed because the users don't have the special DLL. I'd like to be able to keep that from happening in the future.
I know I can get the dependencies of a program by running Dependency Walker, but I'm looking for a way to do it programmatically. Specifically, I have a way to run scripts while creating the installer, and I want something I can put in the installer script to check the program and see if it has a dependency on this DLL, and if so, cause the installer-creation process to fail with an error. I know how to create a simple CLI program that would take two filenames as parameters, run a DependsOn function, and create output based on the result, but I don't know what to put in the DependsOn function. Does anyone know how I'd go about writing it?
You can read the PE import table to find out which DLLs are required at load time. This is what Dependency Walker does, and also what the dumpbin tool included with the Microsoft Platform SDK does (it is installed by Visual Studio and also available as a separate download). Some of the DbgHelp APIs provide access to information from the PE header, but why not invoke the dumpbin tool and inspect its output? Since it's text-based and non-interactive, it should be pretty straightforward to integrate into your installer build process. Dependency Walker can also run in a non-interactive mode with text output.
If you do need to retrieve the information without the help of any other tool, the ImageDirectoryEntryToDataEx function is a good place to start. Also, here's a question that shows how to do it manually (but do use ImageHlp instead, which knows about all the various variants of the PE format):
Printing out the names of implicitly linked dll's from .idata section in a portable executable
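If you go the ImageHlp route, a rough sketch of such a DependsOn check could look like the following; the file name depends_on.cpp, the argument layout, and the minimal error handling are just illustrative. It maps the file read-only and walks the import descriptor table:

    // depends_on.cpp - exits with 1 if <exe> imports <dll-name>, 0 otherwise (sketch only)
    // Build with MSVC; windows.h must be included before the ImageHlp headers.
    #include <windows.h>
    #include <imagehlp.h>
    #include <cstdio>
    #include <cstring>
    #pragma comment(lib, "imagehlp.lib")
    #pragma comment(lib, "dbghelp.lib")

    int main(int argc, char** argv)
    {
        if (argc != 3) {
            std::fprintf(stderr, "usage: depends_on <exe> <dll-name>\n");
            return 2;
        }

        LOADED_IMAGE img;
        if (!MapAndLoad(argv[1], nullptr, &img, FALSE, TRUE)) {  // map the file read-only
            std::fprintf(stderr, "cannot map %s\n", argv[1]);
            return 2;
        }

        ULONG size = 0;
        auto* desc = static_cast<IMAGE_IMPORT_DESCRIPTOR*>(
            ImageDirectoryEntryToDataEx(img.MappedAddress, FALSE,
                                        IMAGE_DIRECTORY_ENTRY_IMPORT, &size, nullptr));

        bool found = false;
        for (; desc && desc->Name; ++desc) {
            // desc->Name is an RVA to the imported DLL's name; convert it to a readable pointer.
            auto* name = static_cast<const char*>(
                ImageRvaToVa(reinterpret_cast<PIMAGE_NT_HEADERS>(img.FileHeader),
                             img.MappedAddress, desc->Name, nullptr));
            if (name && _stricmp(name, argv[2]) == 0)
                found = true;
        }

        UnMapAndLoad(&img);
        std::printf("%s %s on %s\n", argv[1], found ? "depends" : "does not depend", argv[2]);
        return found ? 1 : 0;  // non-zero exit lets the installer script abort with an error
    }

The installer script would run this against the packaged executable and fail the build if the exit code is non-zero.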
Is it hard to compile software so that it is a single .exe file? I have been publishing the program in the traditional manner, and the resulting program consists of a setup file and a couple of data files. Ideally, I would like to have a lone .exe that runs the program without having to install anything.
In general, if you're using Visual Basic, you'll always need to, at a minimum, guarantee the target computer has the proper .NET Framework installed.
If that's the case, then you can just deploy the .exe from your Console or Windows Application project, and it will work, provided you don't use any references or types outside of the standard framework types. If you use any other assemblies, or require any extra data files to exist, then an installer is the correct way to go.
In general, building an installer makes sure that all of the dependencies are in place, which is why it's the "traditional" manner of publishing. Without one, you (or somebody) have to verify the dependencies manually before running your program.