How to filter files in llvm-cov code coverage report?

From the llvm-cov docs:
llvm-cov show [options] -instr-profile PROFILE BIN [-object BIN] [SOURCES]
The llvm-cov show command shows line by line coverage of the binaries
BIN,... using the profile data PROFILE. It can optionally be filtered
to only show the coverage for the files listed in SOURCES.
I'm using the following command:
xcrun llvm-cov show -instr-profile "${PROFDATA}" "${BINARY}" codecov_source_files > Coverage.report
Where codecov_source_files is a file with this line:
*Router.swift
So basically what I want is the report to only contain files with the suffix: Router.swift
But I'm getting a Coverage.report with all the classes in the project.
What am I missing?

It's badly worded, but SOURCES is actually a list of file names, not the name of a file containing a list of filenames.
They need to be the paths to the actual source files on disk. Unfortunately, it doesn't support wildcards or regexes.
Edit: By reading the source I have discovered that you can also list directories as SOURCES and it will recurse into them. Also there is an undocumented option -dump-collected-paths which prints the files the SOURCES match.
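For example (the Swift file paths here are hypothetical; substitute the real locations of your Router files), you can pass the files, or a directory containing them, directly:
xcrun llvm-cov show -instr-profile "${PROFDATA}" "${BINARY}" Sources/App/LoginRouter.swift Sources/App/SettingsRouter.swift > Coverage.report
xcrun llvm-cov show -instr-profile "${PROFDATA}" "${BINARY}" Sources/Routers > Coverage.report
And the undocumented flag mentioned above should print which files a given SOURCES argument actually matches, before you generate the report:
xcrun llvm-cov show -instr-profile "${PROFDATA}" "${BINARY}" -dump-collected-paths Sources/Routers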

You can use --help to look up the supported options:
$ llvm-cov show --help
$ llvm-cov report --help
Maybe the following option is what you want:
--ignore-filename-regex=<string> - Skip source code files with file paths that match the given regular expression
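For example, a minimal sketch that skips dependency and test code (the Pods and Tests path components are assumptions about a typical Xcode project layout):
xcrun llvm-cov show -instr-profile "${PROFDATA}" "${BINARY}" --ignore-filename-regex='(Pods|Tests)' > Coverage.report
Note that this option can only exclude files, and LLVM's POSIX-style regex engine has no lookahead, so inverting it to keep only *Router.swift files is more easily done by listing those files as SOURCES, as described above.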

Related

gcov/lcov - How to exclude all but one directory from coverage data

I am creating code coverage reports for my C++ projects using gcov/lcov, and I am trying to remove all files except the ones in a certain directory from the coverage report (i.e. I do not want different dependencies in various folders to show up in the report).
However I want to do this automatically and not manually. I tried the following:
lcov -r coverage.total '!(<path>)' -o coverage.info
But then lcov comes back with Deleted 0 files. I also tried !(<path>), '[^path]*', and slight variations of these, but nothing seems to work. I can manually remove the undesired folders; for example, the following does work:
lcov -r coverage.total '/usr/libs/*' '/usr/mylibs/*' -o coverage.info
So my question is, how can I have lcov exclude all but a specific directory?
P.S.
I am open to workarounds (for example if this can be done with a bash script)
I am using bash+CMake+gcov+lcov
P.S.
This is not a duplicate of this question. I am asking about an automated way to only include files in a specific directory in the report. (for example the current directory) I am aware of the --remove argument but that is not an automated solution.
Your help is greatly appreciated!
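One possible automated approach, as a sketch: lcov's -e/--extract option is the inverse of -r/--remove and keeps only the files matching the given patterns, so a single pattern for the directory you want to keep avoids listing everything to remove:
lcov -e coverage.total "$(pwd)/*" -o coverage.info
Here "$(pwd)/*" keeps only files under the current directory and drops everything else; substitute the directory you actually want to keep.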

How to merge coverage reports?

I have a C program which I compile with the -fprofile-arcs -ftest-coverage flags. Then I run the program on 5 different inputs; this overwrites the .gcda file and gives me a combined report. But I want to have the coverage report of each individual test, store them in a folder, and then, when I run a coverage tool on that folder, get a report for each test as well as a combined report. Is there a way to do this?
Both gcovr and lcov can merge coverage data from multiple runs, but gcov itself has no built-in merge functionality.
Gcovr 5.0 added the -a/--add-tracefile option which can be used to merge multiple coverage runs. After each test, use gcovr to create a JSON report. Afterwards, you can use gcovr -a cov1.json -a cov2.json to merge multiple coverage data sets and generate a report in the format of your choosing. You can add as many input JSON files as you want, and use a glob pattern (like gcovr -a 'coverage-*.json') if you have many files.
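A sketch of that workflow (the JSON file names are hypothetical; -d/--delete removes the .gcda files after each capture so the runs don't accumulate):
# after running the first test
gcovr --json -o coverage-test1.json -d
# after running the second test
gcovr --json -o coverage-test2.json -d
# merge both runs into one HTML report
gcovr -a coverage-test1.json -a coverage-test2.json --html -o coverage.html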
You can also consider whether using the lcov tool with its --add-tracefile option would work: you can run lcov after each test to generate an lcov tracefile (which you can turn into an HTML report with genhtml). Afterwards, you can merge the tracefiles into a combined report. It is not possible to use lcov's tracefiles with gcovr.
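The equivalent lcov sketch (the tracefile names are hypothetical):
# after each test run: capture a tracefile, then reset the counters
lcov --capture --directory . --output-file test1.info
lcov --zerocounters --directory .
# once all tests have run, merge the tracefiles and render HTML
lcov -a test1.info -a test2.info -o merged.info
genhtml merged.info -o coverage-html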
To add to another answer, gcov can also merge coverage data from multiple runs with the help of gcov-tool:
$ gcov-tool merge dir1 dir2
(by default, the results are stored in the merged_profile folder).
Unfortunately, gcov-tool allows merging only two profiles at a time, but you can use gcov-tool-many to work around this.
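Because the merge is pairwise, a small bash sketch (the profile directory names are hypothetical) can also fold any number of profile directories into one without extra tools:
# start from the first profile, then merge each remaining one into it
cp -r dir1 merged
for d in dir2 dir3 dir4; do
  gcov-tool merge merged "$d" -o merged.tmp
  rm -rf merged && mv merged.tmp merged
done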

How to extract a specific folder using IZARC (IZARCe)

I want to extract a specific directory from a huge zip file (>5 GB) that is somewhat corrupted because of a badly maintained build system that creates the zip.
Tools such as the WinRAR/7-Zip GUI apps have no issues extracting the files, but some command line tools, such as MKS unzip and 7za, fail to extract from the corrupted archive.
After a lot of digging around and trying out many such command line utilities I found out that IZARC successfully extracts files from the archive.
I am running the following command:
IZARCe.exe -e -d -o D:\aHugeZipFile.zip -pD:\temp #"source.txt"
The listing file source.txt contains just one entry:
source/lib/*
which is the only directory in the archive, from where the contents are to be extracted.
But, it is resulting in:
IZArc Command Line Extraction Add-On Version 1.1 (Build: 130)
Copyright(c) 2007 Ivan Zahariev, All Rights Reserved.
http://www.izarc.org contact#izarc.org
Archive File: aHugeZipFile.zip
WARNING: Nothing to do!
I have tried specifying:
/source/lib/*
source/lib/*
source/lib/
source/lib
*source/lib/*
in the listing file, all to no avail! :(
Any pointers on where the error is occurring, and how to fix the issue will be of great help. Thank you in advance!
Using relative or absolute paths in listfiles doesn't appear to work with IZArc. Try using wildcards such as *.*, *.doc, etc. instead of paths in the listfile. Be aware that there appears to be a limitation on the folder depth IZArc will extract to, as well as a tendency to generate CRC errors when files with the same name are present in the same archive, even if they are in different directories.
I would suggest using 7-Zip command-line instead. It can recurse deeply through a file structure without error and can use relative directories and wildcards in its listfiles.
The following 7-Zip command was tested and worked perfectly.
7za x somearchive.zip -o"C:\Documents and Settings\me\desktop\temp_folder\test2" -ir#source.txt -aoa -scsWIN
The source.txt file may contain a combination of relative paths and/or wildcards on separate lines, such as:
Output/*, Folder2/*, *.*, or *.doc.
In the command above: x (extract with full paths), -ir (include filenames, recurse subdirectories), -aoa (override existing files without prompt), -scsWIN (set charset for list files). You may need to adjust these switches for your situation.
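Applied to the archive from the question, source.txt can keep its single relative wildcard line (source/lib/*), and the extraction becomes:
7za x aHugeZipFile.zip -o"D:\temp" -ir#source.txt -aoa -scsWIN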

Finding and Removing Unused Files Through Command Line

My website's file structure has gotten very messy over the years from uploading random files to test different things out. I have a list of all my files, such as this:
file1.html
another.html
otherstuff.php
cool.jpg
whatsthisdo.js
hmmmm.js
Is there any way I can input my list of files via the command line, have it search the contents of all the other files on my website, and output a list of the files that aren't mentioned anywhere in the other files?
For example, if cool.jpg and hmmmm.js weren't mentioned in any of my other files then it could output them in a list like this:
cool.jpg
hmmmm.js
And then any of those other files mentioned above aren't listed because they are mentioned somewhere in another file. Note: I don't want it to just automatically delete the unused files, I'll do that manually.
Also, of course I have multiple folders so it will need to search recursively from my current location and output all the unused (unreferenced) files.
I'm thinking the command line would be the fastest/easiest way, unless someone knows of another. Thanks in advance for any help!
Yep! This is pretty easy to do with grep. In this case, you would run a command like:
$ for orphan in `cat orphans.txt`; do \
    echo "Checking for presence of ${orphan} in present directory..." ; \
    grep -rl "$orphan" . ; \
  done
And orphans.txt would look like your list of files above, one file per line. You can add -i to the grep above if you want to match case-insensitively, and you would want to run the command in /var/www or wherever your distribution keeps its webroot. If a "Checking for..." line has no matches printed below it, no other file references that name.
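If you want the output to be exactly the list of unreferenced files, here is a small sketch along the same lines (it assumes GNU grep; --exclude skips the file itself so a self-reference doesn't hide an orphan, and -F treats the name as a fixed string rather than a regex):
while read -r f; do
  # -q: just test for a match; print the name when no other file references it
  grep -rqF --exclude="$f" "$f" . || echo "$f"
done < orphans.txt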

How to set Sphinx's `exclude_patterns` from the command line?

I'm using Sphinx on Windows.
Most of my documentation is for regular users, but there are some sub-pages with content for administrators only.
So I want to build two versions of my documentation: a complete version, and a second version with the "admin" pages excluded.
I used the exclude_patterns in the build configuration for that.
So far, it works. Every file in every subfolder whose name contains "admin" is ignored when I put this into the conf.py file:
exclude_patterns = ['**/*admin*']
The problem is that I'd like to run the build once to get both versions.
What I'm trying to do right now is running make.bat twice and supply different parameters on each run.
According to the documentation, I can achieve this by setting the BUILDDIR and SPHINXOPTS variables.
So now I have a build.bat that looks like this:
path=%path%;c:\python27\scripts
rem BUILD ADMIN DOCS
set SPHINXOPTS=
set BUILDDIR=c:\build\admin
call make clean
call make html
rem BUILD USER DOCS
set SPHINXOPTS=-D exclude_patterns=['**/*admin*']
set BUILDDIR=c:\build\user
call make clean
call make html
pause
The build in the two different directories works when I delete the line set BUILDDIR=build from the sphinx-generated make.bat file.
However, the exclude pattern does not work.
The batch file listed above outputs this for the second build (the one with the exclude pattern):
Making output directory...
Running Sphinx v1.1.3
loading translations [de]... done
loading pickled environment... not yet created
Exception occurred:
File "C:\Python27\lib\site-packages\sphinx-1.1.3-py2.7.egg\sphinx\environment.
py", line 495, in find_files
['**/' + d for d in config.exclude_dirnames] +
TypeError: coercing to Unicode: need string or buffer, list found
The full traceback has been saved in c:\users\myusername\appdata\local\temp\sphinx-err-kmihxk.log, if you want to report the issue to the developers.
Please also report this if it was a user error, so that a better error message can be provided next time.
Either send bugs to the mailing list at <http://groups.google.com/group/sphinx-dev/>,
or report them in the tracker at <http://bitbucket.org/birkenfeld/sphinx/issues/>.
What am I doing wrong?
Is the syntax for exclude_patterns in the sphinx-build command line different than in the conf.py file?
Or is there a better way to build two different versions in one step?
My first thought was that this was a quoting issue, quoting being notoriously difficult to get right on the Windows command line. However, I wasn't able to come up with any combination of quoting that changed the behavior at all. (The problem is easy to replicate)
Of course it could still just be some quoting issue I'm not smart enough to figure out, but I suspect this is a Sphinx bug of some kind, and hope you will report it to the Sphinx developers.
In the meantime, here's an alternate solution:
quoting from here:
There is a special object named tags available in the config file. It can be used to query and change the tags (see Including content based on tags). Use tags.has('tag') to query, and tags.add('tag') and tags.remove('tag') to change them.
This allows you to essentially pass flags into the conf.py file from the command line, and since the conf.py file is just Python, you can use if statements to set the value of exclude_patterns conditionally based on the tags you pass in.
For example, you could pass Sphinx options like:
set SPHINXOPTS=-t foradmins
to pass the "foradmins" tag, and then check for it in your conf.py like so:
exclude_patterns = ['**/*admin*']
if tags.has('foradmins'):
    exclude_patterns = []
That should allow you to do what you want. Good Luck!
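Adapting the build.bat from the question, the two builds would then look like this (with conf.py excluding the admin pages by default, as above):
path=%path%;c:\python27\scripts
rem BUILD ADMIN DOCS (pass the tag so admin pages are included)
set SPHINXOPTS=-t foradmins
set BUILDDIR=c:\build\admin
call make clean
call make html
rem BUILD USER DOCS (no tag, so admin pages stay excluded)
set SPHINXOPTS=
set BUILDDIR=c:\build\user
call make clean
call make html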
