I have tried to start distcc in pump mode, but for some unknown reason it is not able to distribute the pre-processing task. So I have uninstalled everything related to distcc and want to redo everything from the beginning to get distcc in pump mode up and running.
So tell me:
What packages need to be installed?
What environment variables need to be set in order to start distcc in pump mode?
Which OS?
I got it up and running on Debian/jessie64 and it was hard work :(
Pump mode didn't work with the provided packages (a Python version mismatch, or something like that), so I decided to compile it from source.
install dependencies:
sudo apt-get install gcc make python python-dev libiberty-dev
I tried a few times and got errors about some unused parameters (which -Werror turns into build failures), so:
./autogen.sh
./configure
edit the Makefile and comment out
WERROR_CFLAGS = -Werror
make
sudo make install
on each client (where you want to start compilation from)
edit ~/.distcc/hosts
localhost,cpp,lzo anotherhost,cpp,lzo
cpp enables pump mode, which requires lzo compression.
on every Server (compile slave) -- machines can be both!
distccd --daemon --listen IPOFMACHINE --allow IP_OR_NET
I had problems when using the Debian packages whenever --listen did not specify the IP address of the machine...
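For example, on a slave at 192.168.1.11 serving the whole subnet (the addresses here are just placeholders):
distccd --daemon --listen 192.168.1.11 --allow 192.168.1.0/24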
My results using a dual-core slave and a dual-core master:
make -j8 CC=distcc
16 seconds
pump make -j8 CC=distcc
14 seconds
without distcc
20 seconds
So not that much...
But it sums up if you have a full-time dev-team!
e.g. X seconds saved per compile * Y compiles per day * 20 days per month
Even for small values X=2 and Y=30, that's 2 * 30 * 20 = 1200 seconds, i.e. 20 minutes per developer per month: enough time to invest a little in distcc or ccache.
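A quick sanity check of those numbers in the shell:
echo $(( 2 * 30 * 20 / 60 ))   # seconds/compile * compiles/day * days, in minutes => 20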
If you are trying to use the supplied packages, the configuration for the service can be found in /etc/default/distcc.
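On Debian the relevant variables in that file look roughly like this (a sketch from memory; the addresses are placeholders, so check the file's own comments):
STARTDISTCC="true"
ALLOWEDNETS="127.0.0.1 192.168.1.0/24"
LISTENER="192.168.1.11"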
I'm using a Mac as the master and Debian as the slave, with distcc version distcc-3.2rc1; make sure you have the same version on both master and slave.
Use these arguments to build distcc:
./autogen.sh
./configure --disable-Werror
make -s 2>Logs
make install
Plain mode
is successful without issues, except that I mirrored the absolute path of my source code on the slave in order to distribute compilation, which is a bit of a dirty workaround.
Pump mode
may have issues with the include_server. Some option flags will cause the include server to fail to analyze the source; in that situation you won't be able to pump any header files to the include server, and therefore the slaves cannot recursively include them. You have to add those option flags in include_server/parse_commands.py in order to set up the include server for pump mode.
Showing some of your logs from /var/log/daemon.log or /var/log/distccd.log would probably help.
If you don't have log files in those directories, edit your /etc/init.d/distcc:
DAEMON_ARGS="--pid-file=/var/run/$NAME.pid --log-level=info --log-file=/var/log/$NAME.log --verbose --daemon"
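Then restart the service so the new arguments take effect (sysvinit-style, matching the init script above):
sudo /etc/init.d/distcc restart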
Related
I've got a distccd daemon running on two servers.
One (call it A, .12) serves as the master, while the other (call it B, .11) serves as a slave:
Settings on A:
vim ~/.distcc/hosts
# contents of ~/.distcc/hosts
localhost
192.168.1.11,cpp,lzo
Settings on B:
distccd --daemon --allow 192.168.1.12 --log-file /home/nhlee/distcc.log
"ps aux | grep distcc" to check that it's running
Then I build something with:
pump make -j xxx
And it tells me that:
__________Using distcc-pump from /usr/bin
__________Using 2 distcc servers, of which only 1 support(s) pump mode
...
__________Shutting down distcc-pump include server
However, the time spent is nearly the same. I'm not sure if there is a way to check which components were compiled by which host.
I turned on the monitor with:
distccmon-text 1
I tried this on both machines, and both show me empty lines only.
I looked in /var/log/messages, but there is nothing related to distcc.
I checked in the log file, which is also empty.
How can I see how my files are being compiled?
So I checked with top on both machines, and it turns out that all files were compiled locally on the master (A). I'm not sure why there wasn't any error, though.
I also tried removing 'localhost' from ~/.distcc/hosts, but the results are still the same.
OK, so I tried a few things and solved the problem. And I ran into some new issues that I'd like to share as well.
First, I did
export CC=/usr/bin/distcc
export CXX=/usr/bin/distcc
to let CMake know that I wanted to use distcc instead of gcc/g++.
This was the main problem. After I did this things were showing up in the monitor.
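Note that CMake caches the compiler choice, so CC/CXX must be exported before the first configure run; otherwise, start from a clean build directory (paths here are placeholders):
rm -rf build && mkdir build && cd build
CC=/usr/bin/distcc CXX=/usr/bin/distcc cmake ..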
I had two versions of gcc/g++ installed on my machine, an older version under /usr/bin/ that did not support C++11, and a newer one that did.
Though my LD_LIBRARY_PATH had the new one's path at the front, somehow distcc was still finding the older one first. Thus I ran into compile errors saying that -std=c++11 wasn't recognized.
## The following solved this issue:
sudo yum remove /usr/bin/g++
sudo yum remove /usr/bin/gcc
There were linking errors when I used distcc, but not when I used g++ directly:
# Add a simple wrapper script (mine was called /usr/bin/distg++):
#!/bin/sh
distcc g++ "$@"
Then add "-DCMAKE_CXX_COMPILER=distg++" to your CMake command:
cmake ... -DCMAKE_CXX_COMPILER=distg++
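And make sure the wrapper script is executable:
chmod +x /usr/bin/distg++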
First, a little bit of background as to why I'm asking this question: Our product's daily build script (as run under Debian Linux by Jenkins), does roughly this:
Creates and enters a build environment using debootstrap and chroot
Checks out our codebase (including some 3rd party libraries) from SVN
Runs configure and make as necessary to build all of the code
Packages up the results into an install file that can be uploaded to our headless server boxes using our install tool.
This mostly works fine, but every so often (maybe one daily build out of 10), the part of the script that builds one of our third-party libraries will error out with an error like this one:
CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/bash
/root/software/3rdparty/libogg/missing autoconf
/root/software/3rdparty/libogg/missing: line 81: autoconf: command not found
WARNING: 'autoconf' is missing on your system.
You should only need it if you modified 'configure.ac',
or m4 files included by it.
The 'autoconf' program is part of the GNU Autoconf package:
<http://www.gnu.org/software/autoconf/>
It also requires GNU m4 and Perl in order to run:
<http://www.gnu.org/software/m4/>
<http://www.perl.org/>
make: *** [configure] Error 127
As far as I can tell, this happens occasionally because the timestamps of the files in the third-party library are different (e.g. off by a second or two from each other, just due to the timing of when they were checked out from the SVN server during that particular build). That causes the configure script to think that it needs to auto-regenerate a file, so it tries to call autoconf to do so, and errors out because autoconf is not installed.
Of course the obvious thing to do here would be to install autoconf and automake in the build environment, but the build environment is not one that I can easily modify (due to institutional reasons), so I'd like to avoid having to do that if possible. What I'd like to do instead is figure out how to get the configure scripts (which I can modify) to ignore the timestamps and just always do the basic build that they do when the timestamps are equal.
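One direction that might give exactly that, if the library's configure.ac uses (or could be patched to use) AM_MAINTAINER_MODE, would be to disable the Automake rebuild rules at configure time:
./configure --prefix="$PREFIX" --disable-maintainer-mode --disable-shared --enable-static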
I tried to finesse the problem by manually running 'touch' on some files to force their timestamps to be the same, and that seemed to make the problem occur less often, but it still happens:
( ./configure --prefix="$PREFIX" --disable-shared --enable-static && \
  touch config* aclocal* Makefile* && \
  make clean && make install ) || Failure "libogg"
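A presumably more robust variant would be touching the files in their dependency order (inputs before outputs) rather than all at once, e.g.:
# configure.ac/Makefile.am first, then the files generated from them
touch configure.ac Makefile.am
touch aclocal.m4
touch configure Makefile.in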
Can anyone familiar with how automake works supply some advice on how I might make the "configure" calls in our daily build work more reliably, without modifying the build environment?
You could try forcing SVN to use commit times on checkout on your Jenkins server. These commit times can also be set in SVN if they don't work out for some reason. You could use touch -d or touch -r instead of just touch to avoid race conditions there.
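For reference, the checkout-time knob lives in the Subversion runtime configuration on the Jenkins machine:
# ~/.subversion/config
[miscellany]
use-commit-times = yes
And touch -r copies a timestamp from an existing file instead of taking "now", which avoids the race:
touch -r configure.ac aclocal.m4   # give aclocal.m4 the same mtime as configure.ac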
I have been trying to build the freetype2 library in OSX Mavericks for several weeks now, but without success.
The trouble is with using GNU Autotools to create the configure build script.
I have installed automake, autoconf, libtool, m4 and perl5 using the MacPorts port command.
When executing aclocal, there is supposed to be a file created in the configure directory that contains Autotools macros: aclocal.m4. However, this file is not being output, and the subsequent glibtoolize and autoconf commands are generating a spurious configure script.
The result is: no aclocal.m4 file, and the usual contents of ./autom4te.cache/traces.* being dumped at the top of the generated configure file (the traces.* files are empty).
e.g.:
m4trace:configure.ac:14: -1- AC_SUBST([SHELL])
m4trace:configure.ac:14: -1- AC_SUBST_TRACE([SHELL])
m4trace:configure.ac:14: -1- m4_pattern_allow([^SHELL$])
Any help would be greatly appreciated.
GNU Autotools does not support execution over a working directory stored on a FAT32 file system. It results in spurious m4trace debug messages being output to the generated configure script.
It is unknown why this happens, but it may be related to the reliance on the sleep command to check whether a file has changed: FAT32 rounds timestamps to the nearest second, while execution and subsequent modification checks may happen on a sub-second timescale.
This has been raised with the development team, but for now, I move my working directory to my OSX boot partition before executing GNU Autotools.
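A quick way to check which file system a working directory lives on before running the Autotools (an OS X sketch; the device name will differ on your machine):
df .                   # shows the backing device, e.g. /dev/disk1s2
mount | grep disk1s2   # its mount entry lists the fs type (msdos = FAT)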
I want to compile gnuradio on Raspberry Pi with a fresh copy of Raspbian wheezy. I have a setup of distcc with an i7 to offload the work from RPi.
It works well with a simple test file when I use
$gcc -c hello.c
I can see that the task is done in the log of the other computer.
BUT, when I want to build gnuradio and invoke the 'make' command, distcc doesn't even produce any output in verbose mode.
Trying
$distcc make
produces this:
distcc[5464] (dcc_scan_args) compiler apparently called not for compile
and continues building on the localhost.
Is there a way around this ?
Do you have $DISTCC_HOSTS set in the shell you're calling make from? Have you specified -j for multiple jobs? What is the result of "which gcc" and "echo $CC"?
If you follow the directions here you can see that gcc, cc, etc. are symlinked into /usr/local/bin as references to /usr/bin/distcc, which he then added to the beginning of his path so that make would find it first.
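A sketch of that masquerade setup, assuming the same paths as in those directions:
sudo ln -s /usr/bin/distcc /usr/local/bin/gcc
sudo ln -s /usr/bin/distcc /usr/local/bin/cc
sudo ln -s /usr/bin/distcc /usr/local/bin/g++
export PATH=/usr/local/bin:$PATH   # the links must shadow the real compilers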
It can also be helpful to export DISTCC_VERBOSE=1 to provide more output. There's more thorough documentation on this rPi stackexchange answer.
Distcc can't redistribute all of the work done by make onto other machines. Some of it, such as linking, has to be done locally. Hence the "called not for compile" messages.
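In other words, keep make itself local and wrap only the compiler; a minimal sketch (the host list is a placeholder):
export DISTCC_HOSTS="localhost 192.168.1.20"
make -j4 CC="distcc gcc" CXX="distcc g++"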
I need to compile an old kernel, 2.6.23 (downloaded from linuxkernels.com), in order to use it with a real-time patch (it's a long story...). I installed Ubuntu 10.04, which has kernel 2.6.32-43-generic-pae.
I decided to simply copy Ubuntu's configuration:
cp /boot/config-2.6.32-43-generic-pae /usr/src/linux-2.6.23/.config
I recompile the kernel:
make menuconfig
make
make install
make modules_install
mkinitramfs -o /boot/initrd.img-2.6.23-MYVER 2.6.23-MYVER
Note that in the config I made this edit: I removed the module versioning support under the loadable modules section (this step is required by the patch).
At boot, I get the error from the title:
cpufreq: no nforce2 chipset error
How is that possible, since I copied a working configuration? Maybe it's because of that one flag I disabled?
Not sure, but when I copy an existing .config, I run "make oldconfig" first to make sure I'm all synced up. Then I run "make menuconfig" if I want to interactively review/change any settings.
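A sketch of that sequence for this case, using the paths from the question:
cp /boot/config-2.6.32-43-generic-pae /usr/src/linux-2.6.23/.config
cd /usr/src/linux-2.6.23
make oldconfig    # resolves differences between the 2.6.32 config and the 2.6.23 source
make menuconfig   # optional interactive review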