Go Coverage over multiple packages and GitLab Coverage Badge - go

I'm trying to display an accurate coverage badge for my GitLab project.
The thing is, I have several packages. In .gitlab-ci.yml, I run
go test $(go list ./... | grep -v /vendor/) -v -coverprofile .testCoverage.txt
and my output is something like this:
$ go test -coverprofile=coverage.txt -covermode=atomic ./...
ok gitlab.com/[MASKED]/pam 10.333s coverage: 17.2% of statements
ok gitlab.com/[MASKED]/pam/acquisition 0.004s coverage: 57.7% of statements
ok gitlab.com/[MASKED]/pam/acquisition/api 0.005s coverage: 72.1% of statements
ok gitlab.com/[MASKED]/pam/acquisition/ftp 24.936s coverage: 73.1% of statements
ok gitlab.com/[MASKED]/pam/repartition 0.004s coverage: 90.1% of statements
And my "Test coverage parsing" regex in GitLab is:
^coverage:\s(\d+(?:\.\d+)?%)
If I check .testCoverage.txt, I get a lot of lines like this:
gitlab.com/[MASKED]/pam/repartition/repartition.go:54.33,56.5 1 1
So it gives me a result of 90.1%, when that is only the coverage of the last package.
How should I do it?

According to this answer,
I just needed another command:
go tool cover -func profile.cov
That will give you this result:
✗ go tool cover -func profile.cov
gitlab.com/[MASKED]/pam/acquisition/acquisition.go:17: FetchAll 0.0%
gitlab.com/[MASKED]/pam/acquisition/acquisition.go:32: TransformData 100.0%
gitlab.com/[MASKED]/pam/acquisition/acquisition_mocks.go:13: FetchMeters 0.0%
gitlab.com/[MASKED]/pam/repartition/repartition.go:102: GroupMetersByOperation 100.0%
gitlab.com/[MASKED]/pam/repartition/repartition.go:111: SetProrataRedistributed 71.4%
total: (statements) 68.7%
In GitLab, you can then replace the regex
^coverage:\s(\d+(?:\.\d+)?%)
with
\(statements\)(?:\s+)?(\d+(?:\.\d+)?%)
Now, if you have mocks, the coverage will include them, so you must remove them, following this answer:
go test ./... -coverprofile profile.cov.tmp
cat profile.cov.tmp | grep -v "_mocks.go" > profile.cov
go tool cover -func profile.cov
Of course, all your mocks should be in files with the suffix _mocks.go.
And it should work.
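For reference, a minimal .gitlab-ci.yml sketch tying these steps together (the job name, Go image tag and profile file names are illustrative, not from the original answers; recent GitLab versions can also read the regex from the job's coverage: key instead of the project setting):

test:
  image: golang:1.21
  script:
    - go test ./... -coverprofile profile.cov.tmp
    - grep -v "_mocks.go" profile.cov.tmp > profile.cov
    - go tool cover -func profile.cov
  coverage: '/\(statements\)(?:\s+)?(\d+(?:\.\d+)?%)/'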
Hope it helps others!

Related

go test coverprofile cannot find package

When I try the following
go test -coverprofile=coverage.out
I get this coverage.out:
mode: set
_/Users/gert/Desktop/httx/auth.go:10.66,11.54 1 0
_/Users/gert/Desktop/httx/auth.go:11.54,13.89 2 0
_/Users/gert/Desktop/httx/auth.go:17.3,17.11 1 0
_/Users/gert/Desktop/httx/auth.go:13.89,16.4 2 0
_/Users/gert/Desktop/httx/auth.go:22.42,25.2 2 0
...
But when I then do
go tool cover -func=coverage.out
it fails, as if coverage.out weren't correctly formatted:
cover: can't find "auth.go": cannot find package "_/Users/gert/Desktop/httx/" in any of:
/usr/local/Cellar/go/1.7.1/libexec/src/_/Users/gert/Desktop/httx (from $GOROOT)
/Users/gert/go/src/_/Users/gert/Desktop/httx (from $GOPATH)
EDIT: Note that go test -cover works.
PASS
coverage: 29.7% of statements
ok _/Users/gert/Desktop/httx 0.015s
Your package /Users/gert/Desktop/httx/ is outside of your $GOPATH, which is /Users/gert/go. Move the httx package somewhere under your $GOPATH and it will work fine. You could move it to /Users/gert/go/src/httx or, more likely, to something like /Users/gert/go/src/github.com/your-github-name/httx (assuming you use GitHub).
Alternatively, see "use relative path":
change the coverage paths to ./ :
sed -i "s/_$(pwd|sed 's/\//\\\//g')/./g" coverage.out
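To illustrate with the paths from the question: the sed command builds the pattern _/Users/gert/Desktop/httx from $(pwd) and replaces it with ., so a profile line such as
_/Users/gert/Desktop/httx/auth.go:10.66,11.54 1 0
becomes
./auth.go:10.66,11.54 1 0
and go tool cover -func=coverage.out can then find auth.go relative to the current directory.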

Go Benchmark how does it work

I've got my Go benchmark working with my API calls, but I'm not exactly sure what the output below means:
$ go test intapi -bench=. -benchmem -cover -v -cpuprofile=cpu.out
=== RUN TestAuthenticate
--- PASS: TestAuthenticate (0.00 seconds)
PASS
BenchmarkAuthenticate 20000 105010 ns/op 3199 B/op 49 allocs/op
coverage: 0.0% of statements
ok intapi 4.349s
How does it know how many calls it should make? I do have a loop with b.N as the loop bound, but how does Go know how many iterations to run?
Also, I now have a CPU profile file. How can I view it?
From TFM:
The benchmark function must run the target code b.N times. The benchmark package will vary b.N until the benchmark function lasts long enough to be timed reliably.
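In practice, the testing package runs the benchmark function several times, raising b.N between runs until a run lasts long enough to be timed reliably (one second by default, adjustable with -benchtime). A minimal sketch of the expected shape, where apiCall is a hypothetical stand-in for the API call being measured:

package intapi_test

import "testing"

// apiCall stands in for the code under measurement (hypothetical).
func apiCall() {}

func BenchmarkAuthenticate(b *testing.B) {
    // The framework chooses b.N; the loop body must run exactly b.N times.
    for i := 0; i < b.N; i++ {
        apiCall()
    }
}

As for viewing cpu.out: when profiling is enabled, go test leaves the compiled test binary (here intapi.test) next to the profile, and go tool pprof --text ./intapi.test cpu.out prints the profile per function, as the next answer shows.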

does anybody have a simple pprof use on a go-executable?

I have looked at the article about profiling Go programs, and I simply do not understand it. Does someone have a simple code example where the performance of a code snippet is logged to a text file by a profile "object"?
Here are the commands I use for simple CPU and memory profiling to get you started.
Let's say you made a benchmark function like this:
File something_test.go:
func BenchmarkProfileMe(b *testing.B) {
    // execute the significant portion of the code you want to profile b.N times
    for i := 0; i < b.N; i++ {
        // ... code under measurement ...
    }
}
In a shell script:
# -test.run XXX is a trick so you don't trigger other tests, by asking for a nonexistent test literally called XXX
# you can adapt -benchtime depending on the type of code you want to profile
go test -v -bench ProfileMe -test.run XXX -cpuprofile cpu.pprof -memprofile mem.pprof -benchtime 10s
go tool pprof --text ./something.test cpu.pprof ## To get a CPU profile per function
go tool pprof --text --lines ./something.test cpu.pprof ## To get a CPU profile per line
go tool pprof --text ./something.test mem.pprof ## To get the memory profile
It will present the hottest spots in each case on the console.
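If you want the profile written by the code itself rather than via go test (closer to the profile "object" the question asks about), here is a minimal sketch using the standard runtime/pprof package (the file name and the work function are illustrative):

package main

import (
    "os"
    "runtime/pprof"
)

// work is the code snippet whose performance we want to capture.
func work() {
    sum := 0
    for i := 0; i < 50000000; i++ {
        sum += i
    }
    _ = sum
}

func main() {
    f, err := os.Create("cpu.pprof")
    if err != nil {
        panic(err)
    }
    defer f.Close()

    // Sample CPU usage into cpu.pprof until StopCPUProfile is called.
    if err := pprof.StartCPUProfile(f); err != nil {
        panic(err)
    }
    defer pprof.StopCPUProfile()

    work()
}

Note that the written profile is binary, not text; the human-readable text report comes from running go tool pprof --text on it afterwards.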

Ruby Test Unit: Multiple Scripts, One Output

Can I run multiple Test Cases from multiple scripts but have a single output that either says "100% Pass" or "X Failed" and lists out the failed tests?
For example I want to see something like:
>runtests.rb all #runs all the scripts in the directory
Finished in 4.523 Seconds
100% Pass
>runtests.rb category #runs all the scripts in a specified sub-directory
Finished in 2.1 Seconds
2 Failed:
test_my_test
test_my_test_2
1 Error:
test_my_test_3
I use the built-in MiniTest::Unit along with the autotest command that is part of ZenTest, and I get output like:
autotest
/Users/tinman/.rvm/rubies/ruby-1.9.2-p290/bin/ruby -I.:lib:test -rubygems -e "%w[test/unit tests/test_domains.rb tests/test_regex.rb tests/test_vlan.rb tests/test_nexus.rb tests/test_switch.rb tests/test_template.rb].each { |f| require f }"
Loaded suite -e
Started
........................................
Finished in 0.143375 seconds.
40 tests, 276 assertions, 0 failures, 0 errors, 0 skips
Test run options: --seed 62474
Is that similar to what you are talking about?

How to compare results of two RSpec suite runs?

I have a pretty big spec suite (watirspec) that I am running against a Ruby gem (safariwatir), and there are a lot of failures:
1002 examples, 655 failures, 1 pending
When I make a change in the gem and run the suite again, sometimes a lot of previously failing specs pass (52 in this example):
1002 examples, 603 failures, 1 pending
I would like to know which previously failing specs are now passing, and of course whether any of the previously passing specs are now failing. What I do now to compare the results is run the tests with the --format documentation option, output the results to a text file, and then diff the files:
rspec --format documentation --out output.txt
Is there a better way? Comparing text files is not the easiest way to see what changed.
Just save the results to a file like you're doing right now and then diff those results with a diffing tool of your choice.
I don't know of anything out there that can do exactly that. That said, if you need it so badly that you don't mind spending some time hacking your own formatter, take a look at Spec::Runner::Formatter::BaseFormatter. It is pretty well documented.
I've implemented @Serabe's solution for you. See the gist: https://gist.github.com/1142145.
Put the file my_formatter.rb into your spec folder and run rspec --format MyFormatter. The formatter will compare the current run's results with the previous run's and output the difference as a table.
NOTE: The formatter creates/overwrites file result.txt in the current folder.
Example usage:
D:\Projects\ZPersonal\equatable>rspec spec --format MyFormatter
..........
No changes since last run
Finished in 0.011 seconds
10 examples, 0 failures
The "No changes since last run" line was added by the formatter.
And now I intentionally break one spec and rerun rspec:
D:\Projects\ZPersonal\equatable>rspec spec --format MyFormatter
..F.......
Affected tests (1).
PS CS Description
. F Equatable#== should be equal to the similar sock
PS - Previous Status
CS - Current Status
Failures:
1) Equatable#== should be equal to the similar sock
Failure/Error: subject.should == Sock.new(10, :black, 0)
expected: #<Sock:0x2fbb930 @size=10, @color=:black, @price=0>
got: #<Sock:0x2fbbae0 @size=10, @color=:black, @price=20> (using ==)
Diff:
@@ -1,2 +1,2 @@
-#<Sock:0x2fbb930 @color=:black, @price=0, @size=10>
+#<Sock:0x2fbbae0 @color=:black, @price=20, @size=10>
# ./spec/equatable_spec.rb:30:in `block (3 levels) in <top (required)>'
Finished in 0.008 seconds
10 examples, 1 failure
Failed examples:
rspec ./spec/equatable_spec.rb:29 # Equatable#== should be equal to the similar sock
The table with affected specs was added by the formatter:
Affected tests (1).
PS CS Description
. F Equatable#== should be equal to the similar sock
PS - Previous Status
CS - Current Status
If a spec's status differs between the current and previous run, the formatter outputs the previous status, the current status and the spec description. '.' stands for passed specs, 'F' for failed and 'P' for pending.
The code is far from perfect, so feel free to criticize and change it as you want.
Hope this helps. Let me know if you have any questions.
