gcov/lcov - How to exclude all but one directory from coverage data - bash

I am creating code coverage reports for my C++ projects using gcov/lcov, and I am trying to remove all files except the ones in a certain directory from the coverage report (i.e. I do not want different dependencies in various folders to show up in the report).
However I want to do this automatically and not manually. I tried the following:
lcov -r coverage.total '!(<path>)' -o coverage.info
But then lcov comes back with Deleted 0 files. I also tried !(<path>), '[^path]*', and slight variations of these, but nothing seems to work. I can remove the undesired folders manually; for example, the following does work:
lcov -r coverage.total '/usr/libs/*' '/usr/mylibs/*' -o coverage.info
So my question is, how can I have lcov exclude all but a specific directory?
P.S.
I am open to workarounds (for example if this can be done with a bash script)
I am using bash+CMake+gcov+lcov
P.S.
This is not a duplicate of this question. I am asking about an automated way to include only the files in a specific directory (for example, the current directory) in the report. I am aware of the --remove argument, but that is not an automated solution.
Your help is greatly appreciated!
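For what it's worth, lcov has a built-in inverse of --remove: the -e/--extract option keeps only the files matching a pattern and drops everything else. A minimal sketch, keeping the question's <path> placeholder (quote the pattern so the shell does not expand the glob):
lcov -e coverage.total '<path>/*' -o coverage.info
This should report the number of extracted files instead of Deleted 0 files, and the resulting coverage.info will contain only files under <path>.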

Related

Aggregating `bazel test` reports when testing many targets

I am trying to aggregate all the test.xml reports generated after a bazel test run. The idea is to then upload this full report to a CI platform with a nicer interface.
Consider the following example
$ find .
foo/BUILD
bar/BUILD
$ bazel test //...
This might generate
./bazel-testlogs/foo/tests/test.xml
./bazel-testlogs/foo/tests/... # more
./bazel-testlogs/bar/tests/test.xml
./bazel-testlogs/bar/tests/... # more
I would love to know if there is a better way to aggregate these test.xml files into a single report.xml file (or the equivalent). This way I only need to publish 1 report file.
Current solution
The following is totally viable; I just want to make sure I am not missing some obvious built-in feature.
find ./bazel-testlogs | grep 'test.xml' | xargs [publish command]
In addition, I will check out the JUnit output format, and see if just concatenating the reports is sufficient. This might work much better.
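For reference, plain concatenation usually does not produce valid JUnit XML, since a single report needs one root element. One possible sketch using the third-party junitparser Python package (an assumption on my part, not a bazel feature) to merge everything into one file:
pip install junitparser
# merge every test.xml under bazel-testlogs into a single report.xml
junitparser merge $(find ./bazel-testlogs -name 'test.xml') report.xml
Note that $(...) word-splits on whitespace, which is fine here because bazel test log paths do not normally contain spaces.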

How to merge coverage reports?

I have a C program which I compile with the -fprofile-arcs -ftest-coverage flags. Then I run the program on 5 different inputs; this updates the same .gcda files and gives me a combined report. But I want the coverage report of each individual test, stored in a folder, so that when I run a coverage tool on that folder I get a report for each test as well as a combined report. Is there a way to do this?
Both gcovr and lcov can merge coverage data from multiple runs, but gcov has no built-in functionality.
Gcovr 5.0 added the -a/--add-tracefile option which can be used to merge multiple coverage runs. After each test, use gcovr to create a JSON report. Afterwards, you can use gcovr -a cov1.json -a cov2.json to merge multiple coverage data sets and generate a report in the format of your choosing. You can add as many input JSON files as you want, and use a glob pattern (like gcovr -a 'coverage-*.json') if you have many files.
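A sketch of that per-test workflow (the test commands and file names are assumptions):
# run one test, snapshot its coverage as JSON, then reset the counters
./test1
gcovr --json -o coverage-test1.json
find . -name '*.gcda' -delete
./test2
gcovr --json -o coverage-test2.json
# merge all snapshots into a single HTML report
gcovr -a 'coverage-test1.json' -a 'coverage-test2.json' --html-details -o combined.html
Deleting the .gcda files between runs keeps each JSON snapshot limited to a single test; skip that step if you also want the counters to accumulate.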
You can also consider whether using the lcov tool with its --add-tracefile option would work: You can run lcov after each test to generate an lcov-tracefile (which you can turn into a HTML report with genhtml). Afterwards, you can merge the tracefiles into a combined report. It is not possible to use lcov's tracefiles with gcovr.
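The equivalent lcov sketch (again, the test commands are assumptions; run it from the build directory):
# run one test, capture a tracefile, then reset the counters
./test1
lcov --capture --directory . --output-file test1.info
lcov --zerocounters --directory .
./test2
lcov --capture --directory . --output-file test2.info
# merge the tracefiles and render a combined HTML report
lcov -a test1.info -a test2.info -o merged.info
genhtml merged.info --output-directory coverage-html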
To add to another answer, gcov can also merge coverage data from multiple runs with the help of gcov-tool:
$ gcov-tool merge dir1 dir2
(by default the results are stored in the merged_profile folder).
Unfortunately gcov-tool allows merging only two profiles at a time, but you can use gcov-tool-many to work around this.
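Alternatively, a small bash loop can fold any number of profile directories pairwise without extra tooling (the run* directory names are assumptions):
# merge run1..run4 two at a time, since gcov-tool merge takes exactly two
merged=run1
step=0
for d in run2 run3 run4; do
  gcov-tool merge -o "merged_$step" "$merged" "$d"
  merged="merged_$step"
  step=$((step + 1))
done
echo "combined profile is in: $merged"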

knit Rmarkdown moderncv to pdf using makefile with sty file in subdirectory

I am using the moderncv class to create a CV in Rmarkdown. In order to make the cv reproducible out of the box I have included the .cls and .sty files in the root directory. However, in an effort to keep the root directory uncluttered I would prefer to keep all the moderncv related files in a subdirectory (assets/tex/). I am able to access the .cls file using a relative path in the yaml front matter, but I am not able to access the .sty files unless they are in the root directory.
Searching previous questions on Stack Overflow I learned the following: (1) keeping .cls and .sty files in nested directories is not recommended. I understand this and would like to do it anyway, so that other people can fork my project and knit the CV without having to deal with finding their texmf folder. (2) The solution to my problem seems to involve setting TEXINPUTS in a Makefile (see this thread and another thread).
I am not very good with Makefiles, but I have managed to get one working that will knit my .Rmd file to pdf without problems, so long as the .sty files are still in root. This is what it looks like currently:
PDF_FILE=my_cv.pdf

all : $(PDF_FILE)
	echo All files are now up to date

clean :
	rm -f $(PDF_FILE)

%.pdf : %.Rmd
	Rscript -e 'rmarkdown::render("$<")'
My understanding is that I can set the TEXINPUTS using:
export TEXINPUTS=".:./assets/tex:"
Where "assets/tex" represents the subdirectory where the .sty files are located. I do not know how to incorporate the above code into my makefile so that the .sty files are recognized in the subdirectories and my .Rmd is knit to PDF. In its current state, I get the following error if I remove the .sty files from root and put then in the aforementioned subdirectory:
! LaTeX Error: Command \fax already defined.
Or name \end... illegal, see p.192 of the manual.
which I assume is occurring because the moderncv class needs---and cannot locate---the relevant .sty files.
You could try to define the environment variable in the make rule. Note that each recipe line runs in its own shell, so the variable must be set on the same line as the Rscript call for it to take effect:
%.pdf : %.Rmd
	TEXINPUTS=".:./assets/tex:" Rscript -e 'rmarkdown::render("$<")'
Or you could set the environment variable in a set-up chunk in your Rmd file:
```{r setup, include = FALSE}
Sys.setenv(TEXINPUTS=".:./assets/tex:")
```
Note: Not tested due to lack of minimal example.
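One quick way to check the TEXINPUTS value before a full knit is to ask kpathsea where it resolves the class file (assuming a TeX distribution that ships kpsewhich):
TEXINPUTS=".:./assets/tex:" kpsewhich moderncv.cls
If this prints the path under assets/tex/, the .sty files sitting next to the class should be found the same way.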

How to filter files in llvm-cov code coverage report?

From the llvm-cov docs:
llvm-cov show [options] -instr-profile PROFILE BIN [-object BIN,...] [[-object BIN]] [SOURCES]
The llvm-cov show command shows line by line coverage of the binaries
BIN,... using the profile data PROFILE. It can optionally be filtered
to only show the coverage for the files listed in SOURCES.
I'm using the following command:
xcrun llvm-cov show -instr-profile "${PROFDATA}" "${BINARY}" codecov_source_files > Coverage.report
Where codecov_source_files is a file with this line:
*Router.swift
So basically what I want is the report to only contain files with the suffix: Router.swift
But I'm getting a Coverage.report with all the classes in the project.
What am I missing?
It's badly worded but SOURCES is actually a list of file names, not the name of a file containing a list of filenames.
They need to be the paths to the actual source files on disk. It doesn't support wildcards or regex unfortunately.
Edit: By reading the source I have discovered that you can also list directories as SOURCES and it will recurse into them. Also there is an undocumented option -dump-collected-paths which prints the files the SOURCES match.
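For example, with a directory as the SOURCES argument (the Sources/Routers path is an assumption; substitute your own):
xcrun llvm-cov show -instr-profile "${PROFDATA}" "${BINARY}" Sources/Routers > Coverage.report
# check which files that SOURCES argument actually matched
xcrun llvm-cov show -instr-profile "${PROFDATA}" "${BINARY}" Sources/Routers -dump-collected-paths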
You can use --help to look up the supported options:
$ llvm-cov show --help
$ llvm-cov report --help
Maybe the following option is the one you want:
--ignore-filename-regex=<string> - Skip source code files with file paths that match the given regular expression
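This filters by exclusion rather than inclusion, but it covers many cases; for example, to drop vendored code from the report (the directory names are assumptions):
xcrun llvm-cov show -instr-profile "${PROFDATA}" "${BINARY}" --ignore-filename-regex='(Pods|Carthage)/.*' > Coverage.report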

Join multiple Coffeescript files into one file? (Multiple subdirectories)

I've got a bunch of .coffee files that I need to join into one file.
I have folders set up like a rails app:
/src/controller/log_controller.coffee
/src/model/log.coffee
/src/views/logs/new.coffee
CoffeeScript has a command that lets you join multiple CoffeeScript files into one file, but it only seems to work with one directory. For example, this works fine:
coffee --output app/controllers.js --join --compile src/controllers/*.coffee
But I need to be able to include a bunch of subdirectories kind of like this non-working command:
coffee --output app/all.js --join --compile src/*/*.coffee
Is there a way to do this? Is there a UNIXy way to pass in a list of all the files in the subdirectories?
I'm using terminal in OSX.
They all have to be joined in one file because otherwise each separate file gets compiled & wrapped with this:
(function() { }).call(this);
Which breaks the scope of some function calls.
From the CoffeeScript documentation:
-j, --join [FILE] : Before compiling, concatenate all scripts together in the order they were passed, and write them into the specified file. Useful for building large projects.
So, you can achieve your goal at the command line (I use bash) like this:
coffee -cj path/to/compiled/file.js file1 file2 file3 file4
where file1 - fileN are the paths to the coffeescript files you want to compile.
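To gather every .coffee file under src automatically, you can let find supply that file list; a sketch (which assumes no spaces in the paths, since the command substitution word-splits):
coffee -cj app/all.js $(find src -name '*.coffee' | sort)
Sorting keeps the concatenation order stable between runs, which matters because the order the files are joined in affects scoping.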
You could write a shell script or Rake task to combine them together first, then compile. Something like:
find . -type f -name '*.coffee' -print0 | xargs -0 cat > output.coffee
Then compile output.coffee
Adjust the paths to your needs. Also make sure that the output.coffee file is not in the same path you're searching with find or you will get into an infinite loop.
http://man.cx/find |
http://www.rubyrake.org/tutorial/index.html
Additionally you may be interested in these other posts on Stackoverflow concerning searching across directories:
How to count lines of code including sub-directories
Bash script to find a file in directory tree and append it to another file
Unix script to find all folders in the directory
I've just released an alpha version of CoffeeToaster; I think it may help you.
http://github.com/serpentem/coffee-toaster
The easiest way is to use the coffee command line tool:
coffee --output public --join --compile app
app is my working directory holding multiple subdirectories, and public is where the output .js file will be placed. It is easy to automate this process if you are writing your app in Node.js.
This helped me (-o output directory, -j join to project.js, -cw compile and watch coffeescript directory in full depth):
coffee -o web/js -j project.js -cw coffeescript
Use cake to compile them all in one (or more) resulting .js file(s). Cakefile is used as configuration which controls in which order your coffee scripts are compiled - quite handy with bigger projects.
Cake is quite easy to install and setup, invoking cake from vim while you are editing your project is then simply
:!cake build
and you can refresh your browser and see results.
As I'm also busy learning the best way of structuring the files and using CoffeeScript in combination with Backbone and Cake, I have created a small project on GitHub to keep as a reference for myself; maybe it will help you too with Cake and some basic things. All compiled files are in the www folder so that you can open them in your browser, and all source files (except for the Cake configuration) are in the src folder. In this example, all .coffee files are compiled and combined into one output .js file which is then included in the HTML.
Alternatively, you could use the --bare flag, compile to JavaScript, and then perhaps wrap the JS if necessary. But this would likely create problems; for instance, if you have one file with the code
i = 0
foo = -> i++
...
foo()
then there's only one var i declaration in the resulting JavaScript, and i will be incremented. But if you moved the foo function declaration to another CoffeeScript file, then its i would live in the foo scope, and the outer i would be unaffected.
So concatenating the CoffeeScript is a wiser solution, but there's still potential for confusion there; the order in which you concatenate your code is almost certainly going to matter. I strongly recommend modularizing your code instead.
