How to understand bazel coverage coverage.dat file? - go

Hello, I have been searching the internet a lot, but I cannot find an easy way to generate an HTML coverage file for our Golang project. At least I can get a coverage.dat file, which looks like this:
mode: set
path/foo.go:13.58,15.2 1 1
path/foo.go:17.56,19.2 1 1
I couldn't even find documentation for this file format. What do these numbers mean? They look like line numbers, but not quite.
My Bazel version is 2.2.0

They look like line numbers, but not quite

The fields are: name.go:startLine.startColumn,endLine.endColumn numberOfStatements count
For example, path/foo.go:13.58,15.2 1 1 describes the block from line 13, column 58 to line 15, column 2: it contains 1 statement and was executed (with mode: set, the count field is just 0 or 1 rather than a hit count).
You can generate HTML with the following commands:
# Generate a coverage profile using the cover tool
> go test -coverprofile=coverage.out ./...
# Analyze the coverage in a browser
> go tool cover -html=coverage.out
The last command generates an HTML file under /tmp/coverXXXXX/coverage.html.
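Since the question is specifically about Bazel's coverage.dat: the file shown above starts with mode: set, i.e. it is already in Go's cover-profile format, so go tool cover should be able to render it directly. A hedged sketch (the target path is illustrative):
# Run coverage through Bazel, then render the resulting coverage.dat.
bazel coverage //path/to:my_go_test
go tool cover -html=bazel-testlogs/path/to/my_go_test/coverage.dat -o coverage.html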

Related

Aggregating `bazel test` reports when testing many targets

I am trying to aggregate all the test.xml reports generated after a bazel test run. The idea is to then upload this full report to a CI platform with a nicer interface.
Consider the following example
$ find .
foo/BUILD
bar/BUILD
$ bazel test //...
This might generate
./bazel-testlogs/foo/tests/test.xml
./bazel-testlogs/foo/tests/... # more
./bazel-testlogs/bar/tests/test.xml
./bazel-testlogs/bar/tests/... # more
I would love to know if there is a better way to aggregate these test.xml files into a single report.xml file (or the equivalent). This way I only need to publish 1 report file.
Current solution
The following is totally viable; I just want to make sure I am not missing some obvious built-in feature.
find ./bazel-testlogs | grep 'test.xml' | xargs [publish command]
In addition, I will check out the JUnit output format, and see if just concatenating the reports is sufficient. This might work much better.
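For what it's worth, a naive concatenation sketch (not from the thread; it assumes each Bazel test.xml has its <testsuites> open/close tags on their own lines, so a real XML tool would be more robust):
# Splice the <testsuite> elements of every test.xml into one report.xml.
{
  echo '<?xml version="1.0" encoding="UTF-8"?>'
  echo '<testsuites>'
  find ./bazel-testlogs -name test.xml \
    -exec sed -E -e '/<\?xml/d' -e 's|</?testsuites[^>]*>||g' {} \;
  echo '</testsuites>'
} > report.xml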

Creating a production ready binary from Julia code

I have a Julia program that takes a csv as input, transforms the data via a bunch of functions, and outputs a csv file. I want to turn this into a binary so that I can run it on different machines without shipping the source code to them.
I am looking at PackageCompiler.jl, but I can't find any understandable documentation for creating a binary app. I am trying:
using PackageCompiler
@time create_app("JuliaPrograms", "test"; precompile_execution_file="script.jl")
The file that contains all my code is script.jl and it lives in the dir JuliaPrograms, and I want the compiled binary to be named test.
When I run julia script.jl it performs as I want. I want to be able to run ./test with the same result.
However, I get this error:
ERROR: could not find project at "/Users/userx/JuliaPrograms/"
What am I doing wrong? Do I need some special project directory?
Per the docs here: https://julialang.github.io/PackageCompiler.jl/dev/apps.html#Creating-an-app-1 you need to make sure you define a function called julia_main as the entry point to the app:
function julia_main()::Cint
    # do something based on ARGS?
    return 0 # if things finished successfully
end
You can find an example app here: https://github.com/JuliaLang/PackageCompiler.jl/tree/master/examples/MyApp
You may also want to check the location of the code itself. Is it actually saved at "/Users/userx/JuliaPrograms/"? You can switch directories from the Julia REPL by typing ;, which enters shell mode, and then cd into the directory where your code is.
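Note also that create_app expects a full Julia project (a directory with a Project.toml and a src/ package), not a bare script. A hedged sketch of generating such a skeleton from the shell (the package name is illustrative):
# Creates JuliaPrograms/Project.toml and JuliaPrograms/src/JuliaPrograms.jl;
# julia_main() would then live inside the JuliaPrograms module.
julia -e 'using Pkg; Pkg.generate("JuliaPrograms")'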

gcov/lcov - How to exclude all but one directory from coverage data

I am creating code coverage reports for my C++ projects using gcov/lcov, and I am trying to remove all files except the ones in a certain directory from the coverage report (i.e. I do not want different dependencies in various folders to show up in the report).
However, I want to do this automatically, not manually. I tried the following:
lcov -r coverage.total '!(<path>)' -o coverage.info
But then lcov comes back with Deleted 0 files. I also tried !(<path>), '[^path]*', and slight variations of these, but nothing seems to work. I can manually remove the undesired folders; for example, the following does work:
lcov -r coverage.total '/usr/libs/*' '/usr/mylibs/*' -o coverage.info
So my question is, how can I have lcov exclude all but a specific directory?
P.S.
I am open to workarounds (for example if this can be done with a bash script)
I am using bash+CMake+gcov+lcov
P.S.
This is not a duplicate of this question. I am asking about an automated way to include only the files in a specific directory (for example, the current directory) in the report. I am aware of the --remove argument, but that is not an automated solution.
Your help is greatly appreciated!
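One possible approach (a hedged sketch, not an answer from the original thread): lcov's -e/--extract option is the inverse of -r/--remove; it keeps only the files matching a pattern, so you list the one directory to keep instead of everything to exclude:
# Keep only data for files under the project's own source tree;
# the src path is illustrative.
lcov -e coverage.total "$(pwd)/src/*" -o coverage.info
genhtml coverage.info -o coverage_report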

How to merge coverage reports?

I have a C program which I compile with the -fprofile-arcs -ftest-coverage flags. Then I run the program on 5 different inputs; this overrides the .gcda file and gives me a combined report. But I want to have the coverage report of individual tests, store them in a folder, and, when I run any coverage tool on this folder, get a report for each test as well as a combined report. Is there a way to do this?
Both gcovr and lcov can merge coverage data from multiple runs, but gcov has no built-in functionality.
Gcovr 5.0 added the -a/--add-tracefile option which can be used to merge multiple coverage runs. After each test, use gcovr to create a JSON report. Afterwards, you can use gcovr -a cov1.json -a cov2.json to merge multiple coverage data sets and generate a report in the format of your choosing. You can add as many input JSON files as you want, and use a glob pattern (like gcovr -a 'coverage-*.json') if you have many files.
You can also consider whether using the lcov tool with its --add-tracefile option would work: you can run lcov after each test to generate an lcov tracefile (which you can turn into an HTML report with genhtml). Afterwards, you can merge the tracefiles into a combined report. It is not possible to use lcov's tracefiles with gcovr.
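A hedged sketch of the gcovr workflow described above (the test commands and file names are illustrative):
# After each test, capture a JSON snapshot, then reset the counters.
./run_test_1 && gcovr --json -o coverage-1.json
find . -name '*.gcda' -delete
./run_test_2 && gcovr --json -o coverage-2.json
# Merge the per-test snapshots into a combined HTML report.
gcovr -a coverage-1.json -a coverage-2.json --html-details -o combined.html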
To add to another answer, gcov can also merge coverage data from multiple runs with the help of gcov-tool:
$ gcov-tool merge dir1 dir2
(by default, the results are stored in the merged_profile folder).
Unfortunately, gcov-tool only allows merging two profiles at a time, but you can use gcov-tool-many to work around this.
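If gcov-tool-many is unavailable, a hedged sketch of folding more than two profile directories together pairwise (the directory names are illustrative):
# Merge each additional profile directory into a running result.
cp -r dir1 merged
for d in dir2 dir3 dir4; do
    gcov-tool merge -o merged.tmp merged "$d"
    rm -rf merged
    mv merged.tmp merged
done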

Setting up custom build environment

I am trying to set up a custom build environment for Sublime Text 3 for competitive programming.
My goal is as follows:
Build the current source file
Run it and read inputs from a file input.in
Write output to a file output.out
Diff output.out against expected (the expected file contains the expected output)
This is how the window setup looks:
This is my JSON file for the build system:
{
    "cmd": ["g++ -std=c++11 ${file} -o ${file_path}/${file_base_name} && ${file_path}/${file_base_name} < ${file_path}/input.in > ${file_path}/output.out && diff output.out expected"],
    "shell": true
}
So far, steps 1-3 work as expected. But for the last step 4, I am not able to get the result in a suitable format: when the files match there is no output (since diff generates nothing on a match), and when they do not match, this build system produces output that is not human-readable.
5a6
> f
[Finished in 0.1s with exit code 1]
Can anyone suggest a better way to output the result, or is there a way to use Linux's notification utility?
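One possible direction (a hedged sketch, not from the thread): have the build system call a small wrapper script instead of raw diff, so it prints a readable verdict and optionally fires a desktop notification via notify-send (file names follow the build system above):
#!/bin/sh
# verdict.sh: compare actual vs. expected output and report readably.
if diff -u output.out expected > diff.txt; then
    echo "PASS: output matches expected"
    command -v notify-send >/dev/null && notify-send "Build" "Test passed"
    exit 0
else
    echo "FAIL: output differs"
    cat diff.txt
    command -v notify-send >/dev/null && notify-send "Build" "Test FAILED"
    exit 1
fi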
