I have the same source files (C and Obj-C) being compiled into two targets: the unit test executable and the actual product (which then gets integration tested). The two targets build into different places, so the object files, .gcno and .gcda files are separate. Not all source files are compiled into the unit test, so not all objects will exist there. All source files are compiled into the product build.
Is there a way to combine the two sets of .gcda files to get the total coverage for unit tests and integration tests (as they are run on the product build)?
I'm using lcov.
Mac OS X 10.6, GCC 4.0
Thanks!
Finally I managed to solve my problem by means of lcov.
Basically what I did is the following:
Compile the application with the flags -fprofile-arcs -ftest-coverage --coverage
Distribute a copy of the application to each node.
Execute the application on each node in parallel. (This step generates the coverage information in the application directory on the access host.)
Let lcov do its work:
lcov --directory src/ --capture --output-file coverage_reports/app.info
Generate the html output:
genhtml -o coverage_reports/ coverage_reports/app.info
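To make step 1 concrete, a compile/link line with those flags might look like this (the file and binary names are only placeholders):

gcc --coverage -fprofile-arcs -ftest-coverage -g -O0 -o myapp main.c util.c

Running ./myapp afterwards writes main.gcda and util.gcda next to the corresponding .gcno files, which is what the lcov --capture step above picks up.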
I hope this can be of help to someone.
Since you're using lcov, you should be able to convert the gcov .gcda files into lcov files and merge them with lcov --add-tracefile.
From manpage: Add contents of tracefile.
Specify several tracefiles using the -a switch to combine the coverage data contained in these files by adding up execution counts for matching test and filename combinations.
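For example (the directory names here are just guesses at your layout), you could capture each build separately and then add the tracefiles together:

lcov -c -d build/unittests -o unittests.info
lcov -c -d build/product -o product.info
lcov -a unittests.info -a product.info -o combined.info
genhtml -o coverage-report combined.info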
See UPDATE below.
I think the intended way to do this is not to combine the .gcda files directly but to create independent coverage data files using
lcov -o unittests.coverage -c -d unittests
lcov -o integrationtests.coverage -c -d integrationtests
Each coverage data file then represents one "run". You can of course create separate graphs or html views. But you can also combine the data using --add-tracefile, -a for short:
lcov -o total.coverage -a unittests.coverage -a integrationtests.coverage
From the total.coverage you can generate the total report, using genhtml for example.
UPDATE: I found that it is actually possible to merge .gcda files directly using gcov-tool, which unfortunately is not easily available on the Mac, so this update doesn't answer the original question.
But with gcov-tool you can even incrementally merge many sets together into one:
gcov-tool merge dir1 dir -o dir
gcov-tool merge dir2 dir -o dir
gcov-tool merge dir3 dir -o dir
Although that is not documented and might be risky to rely on.
This is really fast and avoids the roundabout detour through lcov, which is much slower when merging many sets. Merging some 80 sets of 70 files takes under 0.5 seconds on my machine. And you can still run lcov on the aggregated set, which is also much faster, should you need it. I use Emacs cov-mode, which uses the .gcov files directly.
See this answer for details.
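As a rough sketch of that last lcov step, assuming the merged .gcda files (together with their .gcno files) ended up in dir as in the commands above:

lcov -c -d dir -o total.info
genhtml -o coverage-html total.info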
I merge them by passing lcov multiple -d parameters, as below. It works.
lcov -c -d ./tmp/ -d ./tmp1/ -o ./tmp/coverage.info
A simpler alternative would be to compile the shared C/ObjC files once (generating .o files or, better yet, a single .a static library) and later link them into each test. In that case gcov will automatically merge the results into a single .gcno/.gcda pair (beware that the tests have to be run serially, to avoid races when accessing the gcov files).
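A rough Makefile sketch of that idea (the file, library and target names are illustrative, not taken from the question):

CFLAGS += --coverage
SHARED_OBJS = foo.o bar.o          # sources shared by both targets

# Compile the shared code once and archive it
libshared.a: $(SHARED_OBJS)
	ar rcs $@ $^

# Both binaries link the same objects, so gcov accumulates counts in a
# single foo.gcda/bar.gcda pair; run the two test suites serially
unittests: unittests_main.o libshared.a
	$(CC) $(CFLAGS) -o $@ $^

integrationtests: integration_main.o libshared.a
	$(CC) $(CFLAGS) -o $@ $^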
Related
What principles should be followed in order not to rebuild objects in a Makefile every time?
I only know the most primitive case, where we split the compilation into several steps: creating object modules and linking them. But what should be done in more difficult cases? Let's say I have a set of input files and expected output files to test against. I want to make it so that tests are re-run only for changed files or files with wrong output.
TEST_INPUT_FILES = $(wildcard $(TEST_DIR)/*in)
TEST_OUTPUT_FILES = $(wildcard $(TEST_DIR)/*out)
The above shows how I create lists for each group of files. And in general, how can I be sure that when a file is changed, the tests for that file will be run again? Any advice or literature on this topic would be useful; I couldn't find the answer myself, because everyone covers only the trivial build case.
Make creates files from other files, using the shell and any program available in the shell environment, whenever the target files do not exist or are out of date.
What you'd do is have Make rules that run the test with the tested program and any input files, including the expected output, and then create a test report file. If you want to rerun a failed test, you'd remove (clean) the test report prior to running the test.
# Make report from test program and inputs
$(REPORT): $(TEST) $(TEST_INPUT) $(TEST_EXPECTED_OUTPUT)
# Remove old report, if any
	rm -f $@
# Run test creating report on success
	$^
You can also have make remove the report automatically when the test recipe fails by adding the special target .DELETE_ON_ERROR: https://www.gnu.org/software/make/manual/make.html#Special-Targets
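Tying that back to your wildcard lists, a minimal sketch might look like this (the *.in/*.out naming, the program name and the report suffix are assumptions; the program under test is expected to exit non-zero on a mismatch):

PROGRAM      := ./myprog                      # program under test (placeholder)
TEST_DIR     := tests
TEST_INPUTS  := $(wildcard $(TEST_DIR)/*.in)
TEST_REPORTS := $(TEST_INPUTS:.in=.report)

.PHONY: test
.DELETE_ON_ERROR:

test: $(TEST_REPORTS)

# One report per input file; remade only when the input, the expected
# output or the program itself changes
$(TEST_DIR)/%.report: $(TEST_DIR)/%.in $(TEST_DIR)/%.out $(PROGRAM)
	$(PROGRAM) $< $(word 2,$^) > $@

With .DELETE_ON_ERROR, a failing run removes the half-written report, so that test is re-run on the next make invocation.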
A code generator is executed from GNU make. The generator produces several files (depending on the input) and only touches the files when their content changes. Therefore a sentinel target needs to be used to record the generator execution time:
GEN_READY: $(gen_input_files)
	gen.exe $(gen_input_files)
	touch GEN_READY

$(gen_output_files): GEN_READY
It works well, except when a generated file is deleted, but the sentinel file is left in place. Since the sentinel is there, and it's up-to-date, the generator is not executed again.
What is the proper solution to force make to re-run the generator in this case?
Here is one way to group them using an archive:

# create archive of output files from input files passed through gen.exe
GEN_READY.tar: $(gen_input_files)
	@echo "Generate the files"
	gen.exe $^
	@echo "Put generated files in archive"
	tar -c -f $@ $(gen_output_files)
	@echo "Remove intermediate files (recreated by next recipe)"
	rm $(gen_output_files)

# Extracting individual files for use as prerequisite or restoration
$(gen_output_files): GEN_READY.tar
	@echo "Extract one member"
	tar -x -f $< $@
Since tar (and zip, for that matter) allows duplicate entries, there could be opportunities to update or append files in the archive instead of rewriting it, if the input-output relation allows.
Edit: Simplified solution.
How can I make lcov and genhtml show files that are not linked / loaded? I'm using it to show test coverage and I would like to see every source file appear in the HTML report, even if it has zero coverage. That way I can use lcov to identify source files that are missing tests. The missing source files have a .gcno file created for them, but not a .gcda file.
If you want to see all files, you have to create a baseline coverage data file with the -i option.
After capturing your data, you have to combine the two files with the -a option.
There is an example in lcov man page (https://linux.die.net/man/1/lcov):
Capture initial zero coverage data. Run lcov with -c and this option on the directories containing .bb, .bbg or .gcno files before running any test case. The result is a "baseline" coverage data file that contains zero coverage for every instrumented line. Combine this data file (using lcov -a) with coverage data files captured after a test run to ensure that the percentage of total lines covered is correct even when not all source code files were loaded during the test.
Recommended procedure when capturing data for a test case:
1. Create the baseline coverage data file:
lcov -c -i -d appdir -o app_base.info
2. Perform the test:
appdir/test
3. Create the test coverage data file:
lcov -c -d appdir -o app_test.info
4. Combine the baseline and test coverage data:
lcov -a app_base.info -a app_test.info -o app_total.info
You then have to use the app_total.info as source for genhtml.
I have a tool, let's say mytool, that does some pre-processing on source files. Basically, it instruments some functions (based on an input list file) in the source files.
The way it is invoked is (let's say we have two source files, input1.c and input2.c):
./mytool input1.c input2.c --
('--' leaves some arguments at their defaults)
Now, I wish to hook this tool into any build process, i.e. a makefile, such that the tool gets all the source files from the makefile and runs on all of them. So, say there were 3 C files - 1.c, 2.c and 3.c - then we would want to do
./mytool 1.c 2.c 3.c --
and then proceed with the usual build process i.e. 'make' in the simplest case.
How can I achieve this? Is this possible by some sort of variable over-riding?
The simplest thing to do, assuming you don't mind the tool running once per .c file (as opposed to once for all .c files) would be to replace the default %: %.c and %.o: %.c built-in make rules (assuming those are in use) and add your tool to the body of those rules.
This only runs the tool for files that need to be re-built from source (as per @Beta's comment on the OP).
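A minimal sketch of that approach, assuming GNU make, that mytool rewrites each file in place, and that it accepts a single source file before the '--' (the program and object names are placeholders):

# Override the built-in %.o: %.c rule so every source file is passed
# through the instrumentation tool before it is compiled
%.o: %.c
	./mytool $< --
	$(CC) $(CFLAGS) -c -o $@ $<

myprog: 1.o 2.o 3.o
	$(CC) $(CFLAGS) -o $@ $^

If the tool really has to see all source files in one invocation, you would instead run it once up front, e.g. from a separate sentinel target that the object files depend on.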
I'm building ubuntu-8.04 with gcc 3.4 and I need to generate the .i files, which are the output of the gcc preprocessor. I have tried adding the --save-temps flag but this only generates the .i files for the top level directory, i.e. source, and does not seem to get passed recursively to the child directories. I also tried the -E flag, which is supposed to output preprocessed files and stop compilation, but this did not generate the files either.
I'm specifically looking to generate the .i files for the source in net/core.
Any help is appreciated. Thanks!!
There is no support for bulk preprocessing.
For a single file, use: make net/core/foo.i
For bulk preprocessing, a workaround is: make C=2 CHECK="cc -E"
I know this is an old post, but maybe it can be useful; for me this works:
gcc -E filename.c -o outputfile.i