Turn a thin archive into a normal one - static-libraries

I'm building V8, and by default it builds as a "thin" archive, where the .a files essentially just contain pointers to the object files on your filesystem instead of containing the object files themselves. See man ar for details.
I want to be able to put this library in a central place so that other people can link to it, and it would obviously be much easier to provide a normal archive file instead of providing a gaggle of object files as well.
How do I take the thin archives produced by the build and turn them into normal ones? I assume it would be as simple as enumerating the object files in the thin archive and rebuilding an archive using those, but I don't know what command can be used to list the archive's object files.

After some additional research, ar -t can be used to enumerate the object files in an archive, so after that it's just a matter of providing that list to ar as you usually would when creating an archive.
The following script handled this for all of the libraries at once:
# For each thin archive, list its members with ar -t and repack them into a regular archive.
for lib in `find . -name '*.a'`;
do ar -t $lib | xargs ar rvs $lib.new && mv -v $lib.new $lib;
done

Related

Change prerequisites in a makefile at runtime

I am relatively new to make and don't know how to do one specific thing:
The overall process should look something like this:
1. The source files are Java sources in a directory like src/org/path/to/packages/*.java.
2. I want to translate only one specific Java file, but the translation process will automatically translate all of its dependencies (I say 'translate' because I use j2objc to translate the Java files to Obj-C files - but that should be of no concern for this question).
3. The translated files will be put into the build/ directory with a folder structure reflecting the source folder structure (so build/org/path/to/packages/*.m + *.h).
4. These *.m and *.h files will then be compiled with j2objcc (a clang wrapper) into *.o files -> this step has to be done per file, so every file is compiled with a command like j2objcc -c build/org/path/to/packages/file1.m -o build/org/path/to/packages/file1.o.
5. These shall be combined into a static library using ar.
My problem is that I know which (one) Java file I am starting with, but after step 2 I don't know which *.m and *.h files are generated/translated into the build directory. I'd like to read the contents of the build dir after step 2 with a command like find ./build -name '*.m' at make runtime, but I don't know how to use this as a prerequisite in the make target.
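A minimal sketch of that idea, assuming GNU make and a recursive second pass so the find only runs after build/ has been populated (the root class name, library name, and j2objc options below are illustrative, not taken from the question):

all: translate
	$(MAKE) libtranslated.a

translate:
	j2objc -d build -sourcepath src src/org/path/to/packages/Entry.java

# On the recursive invocation build/ is already populated, so $(shell find ...)
# picks up every generated .m file when the makefile is parsed.
MFILES := $(shell find ./build -name '*.m')
OFILES := $(MFILES:.m=.o)

build/%.o: build/%.m
	j2objcc -c $< -o $@

libtranslated.a: $(OFILES)
	ar rcs $@ $(OFILES)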

Install data directory tree with massive number of files using automake

I have a data directory which I would like automake to generate install and uninstall targets for. Essentially, I just want to copy this directory verbatim to the DATA directory. Normally, I might list all the files individually, like
dist_whatever_DATA=dir/subdir/filea ...
But the problem arises when my directory structure looks like this
root
    subdir
        ~10 files
    subdir
        ~10 files
    subdir
        ~700 files
    subdir
    ...
    (~20 subdirs in total)
I just cannot list all 1000+ files included as part of my Makefile.am. That would be ridiculous.
I need to preserve the directory structure as well. I should note that this data is not generated by the build process at all; it is largely short audio recordings. So it's not as if I want automake to "check" that every file I want to install has actually been created: the files are either there or not, whatever file is there I want installed, and whatever file is not there should not be installed. I know this is the justification used elsewhere for not doing wildcard installs, but none of those reasons apply here.
I would use a script to generate a Makefile fragment that lists all the files:
echo 'subdir_files = \' > subfiles.mk
find subdir -type f -print | sed 's/^/ /;$q;s/$/ \\/' >> subfiles.mk
and then include this subfiles.mk from your main Makefile.am:
include $(srcdir)/subfiles.mk
nobase_dist_pkgdata_DATA = $(subdir_files)
A second option is to set EXTRA_DIST = subdir and then write custom install-data-local and uninstall-local rules.
The problem here is that EXTRA_DIST = subdir will distribute all files in subdir/, including backup files, configuration files (e.g. from your VCS), and other things you would not want to distribute.
Using a script as above lets you filter the files you really want to distribute.
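For reference, a rough sketch of what that second option could look like (these rules are illustrative, assume pkgdatadir as the destination, and are not part of the original answer):

EXTRA_DIST = subdir

install-data-local:
	$(MKDIR_P) "$(DESTDIR)$(pkgdatadir)"
	cp -R "$(srcdir)/subdir" "$(DESTDIR)$(pkgdatadir)/"

uninstall-local:
	rm -rf "$(DESTDIR)$(pkgdatadir)/subdir"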
I've found that installing hundreds of files separately makes for a tormentingly long invocation of make install. I had a similar case where I wanted to install hundreds of files, preserving the directory structure, and I did not want to change my Makefile.am every time a file was added to or removed from the collection.
I included a LZMA archive of the files in my distribution, and made automake rules like so:
GIANTARCHIVE = My_big_archive.tar.lz
dist_pkgdata_DATA = $(GIANTARCHIVE)
install-data-hook:
	cd $(DESTDIR)$(pkgdatadir); \
	cat $(GIANTARCHIVE) | unlzma | $(TAR) --list > uninstall_manifest.txt; \
	cat $(GIANTARCHIVE) | unlzma | $(TAR) --no-same-owner --extract; \
	rm --force $(GIANTARCHIVE); \
	cat uninstall_manifest.txt | sed --expression='s/^\|$$/"/g' | xargs chmod a=rX,u+w

uninstall-local:
	cd $(DESTDIR)$(pkgdatadir); \
	cat uninstall_manifest.txt | sed --expression='s/ /\\ /g' | xargs rm --force; \
	rm --force uninstall_manifest.txt
This way, automake installs My_big_archive.tar.lz into the $(pkgdatadir) directory and extracts it there, making a list of all the files that were in it so it can uninstall them later. This also runs much faster than listing each file as an install target, even if you were to autogenerate that list.
I would write a script (either as a separate shell script or in the Makefile.am) that is run as part of the install-data-hook target.

Relationship between sources and binaries in Makefile

I have a source code tree that contains about 300 Makefiles. These Makefiles compile sources to object files and object files to several firmware images.
What I want is to get a listing like this:
<firmware image name> : <list of object files> : <list of source files>
Is there any tool for that?
Well, if what you are shooting for is getting a list of images that will be built, you could try playing around with the -n and -W flags.
-n tells make to do a dry-run. It will print out all the commands it would have executed, but won't really execute them. If you do this after a make clean it might give you the information you want. Perhaps not in exactly the form you want, but that's what sed and awk are for.
-W file tells make to pretend that file was modified, and do its thing. This might be helpful if a make clean would be excessive.
Not by magic. Make doesn't know which files are sources and which are objects; it just sees a set of targets with various dependency relationships between them. You could try post-processing the output of make -p, which prints make's internal database of rules and variables.
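For example, a rough sketch of the dry-run approach (GNU make assumed; the grep pattern is only a guess and has to be adapted to how your link commands actually look):

make clean
make -n > dry-run.log
# Link lines usually name the firmware image after -o; adjust the pattern to your toolchain.
grep -nE -e '-o [^ ]+\.(elf|bin|hex)' dry-run.log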

With gcov, is it possible to merge two .gcda files?

I have the same source files (C and Obj-C) being compiled into two targets: the unit test executable and the actual product (which then gets integration tested). The two targets build into different places, so the object files, .gcno and .gcda files are separate. Not all source files are compiled into the unit test, so not all objects will exist there. All source files are compiled into the product build.
Is there a way to combine the two sets of .gcda files to get the total coverage for unit tests and integration tests (as they are run on the product build)?
I'm using lcov.
Mac OS X 10.6, GCC 4.0
Thanks!
Finally, I managed to solve my problem by means of lcov.
Basically what I did is the following:
1. Compile the application with the flags -fprofile-arcs -ftest-coverage --coverage.
2. Distribute a copy of the application to each node.
3. Execute the application on each node in parallel. (This step generates the coverage information in the application directory on the access host.)
4. Let lcov do its work:
lcov --directory src/ --capture --output-file coverage_reports/app.info
5. Generate the HTML output:
genhtml -o coverage_reports/ coverage_reports/app.info
I hope this can be of help to someone.
Since you're using lcov, you should be able to convert the gcov .gcda files into lcov files and merge them with lcov --add-tracefile.
From the man page: "Add contents of tracefile. Specify several tracefiles using the -a switch to combine the coverage data contained in these files by adding up execution counts for matching test and filename combinations."
See UPDATE below.
I think the intended way to do this is not to combine the .gcda files directly but to create independent coverage data files using
lcov -o unittests.coverage -c -d unittests
lcov -o integrationtests.coverage -c -d integrationtests
Each coverage data file then represents one "run". You can of course create separate graphs or HTML views. But you can also combine the data using --add-tracefile, -a for short:
lcov -o total.coverage -a unittests.coverage -a integrationtests.coverage
From the total.coverage you can generate the total report, using genhtml for example.
UPDATE: I found that it is actually possible to merge .gcda files directly using gcov-tool, which unfortunately is not easily available on the Mac, so this update doesn't answer the original question.
But with gcov-tool you can even incrementally merge many sets together into one:
gcov-tool merge dir1 dir -o dir
gcov-tool merge dir2 dir -o dir
gcov-tool merge dir3 dir -o dir
Although that is not documented and might be risky to rely on.
This is really fast and avoids the roundabout way over lcov, which is much slower when merging many sets. Merging some 80 sets of 70 files takes under 0.5 seconds on my machine. And you can still run lcov on the aggregated set, which is also very much faster, should you need it. I use Emacs cov-mode, which uses the .gcov files directly.
See this answer for details.
I merge them by passing multiple -d parameters to lcov, as below. It works.
lcov -c -d ./tmp/ -d ./tmp1/ -o ./tmp/coverage.info
A simpler alternative would be to compile the shared C/ObjC files once (generating .o files or, better yet, a single .a static library) and link that into each test. In that case gcov will automatically merge the results into a single .gcno/.gcda pair (beware that the tests have to be run serially, to avoid races when accessing the gcov files).
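A rough sketch of that alternative (compiler invocations and file names are illustrative, not from the answer above):

# Compile the shared code once, instrumented for coverage.
gcc --coverage -c shared.c -o shared.o
ar rcs libshared.a shared.o

# Link the same instrumented objects into both binaries.
gcc --coverage unit_tests.c libshared.a -o unit_tests
gcc --coverage product_main.c libshared.a -o product

# Run serially; both runs update the same shared.gcda next to shared.o.
./unit_tests
./product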

How do I use dependencies in a makefile without calling a target?

I'm using makefiles to convert an internal file format to an XML file which is sent to other colleagues. They would make changes to the XML file and send it back to us (Don't ask, this needs to be this way ;)). I'd like to use my makefile to update the internal files when this XML changes.
So I have these rules:
%.internal: $(DATAFILES)
	# Read changes from XML if any
	# Create internal representation here

%.xml: %.internal
	# Convert to XML here
Now the XML could change because of the workflow described above. But since no data files have changed, make would tell me that file.internal is up-to-date. I would like to avoid making the %.internal target phony, and a circular dependency on %.xml obviously doesn't work.
Any other way I could force make to check for changes in the XML file and re-build %.internal?
You want to allow two different actions: making the xml file from the internal file, and making the internal file from the xml file. Since Make knows only the modification times, it knows which target is older but not whether it should be remade. So put in another file as a flag to record when either action was last taken, and make that your primary target; if either target is newer than the flag, it has been modified by something other than these actions, and make should rebuild the older target (and then touch the flag).
There are several ways to implement this. In some versions of Make (such as recent versions of GNU Make) you can write double-colon rules, so that Make will rebuild a target differently based on which prerequisite triggered it:
%.flag:: %.internal
	# convert $*.internal to $*.xml
	touch $@

%.flag:: %.xml
	# rewrite $*.internal based on $*.xml
	touch $@
A less elegant but more portable way is to look at $? and rebuild the other file:
%.flag: %.xml %.internal
	if [ "$?" = "$*.internal" ]; then \
		echo "convert $*.internal to $*.xml"; \
	else \
		echo "rewrite $*.internal based on $*.xml"; \
	fi
	touch $@
I think you could do something like this:
all: .last-converted-xml .last-converted-internal

.last-converted-internal: *.internal
	./internal-2-xml $?
	touch $@ .last-converted-xml

.last-converted-xml: *.xml
	./xml-2-internal $?
	touch $@ .last-converted-internal
This runs ./xml-2-internal on any .xml files newer than the marker file .last-converted-xml (and ./internal-2-xml on any .internal files newer than its marker). The $? gives you the list of all prerequisites (*.xml) that are newer than the marker file.
Of course, the conversion programs will have to be written to take a list of files and process each one.
I'm not sure from the question whether you actually need the .internal file, or if that was just an attempt to get the makefile working. So either your conversion program can convert each .xml file in place, or it can also generate file.internal if you need it.
Use the -W option of make to have make think one of the data files has changed:
make -W somedatafile
This will cause make to think somedatafile has been modified without actually changing its modification time.
Would it be possible to use different names for the XML files? The file you create from the internal format would have one name, and the file your colleagues send back would have another. If they used different names there would be no circular dependency.
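A tiny sketch of that two-name scheme (the directory layout and the calling convention of the conversion tools are illustrative; the tool names reuse those from the answer above):

# XML you generate lives under out/, XML your colleagues send back under in/,
# so neither rule depends on its own output and there is no cycle.
out/%.xml: %.internal
	./internal-2-xml $< > $@

%.internal: in/%.xml
	./xml-2-internal $< > $@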
