Codeclimate test coverage formatter for Golang

Nowhere in the Code Climate docs is it explained how to specify the coverage formatter. But when I try to send coverage to Code Climate:
./cc-test-reporter before-build
./cc-test-reporter after-build
It fails with:
Error: could not find any viable formatter. available formatters: simplecov, lcov, coverage.py, clover, gocov, gcov, cobertura, jacoco
I have gocov installed. I also generated a report with gocov:
gocov test -coverprofile=out
And I tried to specify the report file to Codeclimate in various ways:
./cc-test-reporter after-build out
./cc-test-reporter after-build < out
But I had no luck...
I haven't found any formatter-related directives for the .codeclimate.yml file. The doc is written in a super "you know" style, so it didn't help. How do I enable and send test coverage with Code Climate?

Export var:
CC_TEST_REPORTER_ID=...
Run:
for pkg in $(go list ./... | grep -v vendor); do
go test -coverprofile=$(echo $pkg | tr / -).cover $pkg
done
echo "mode: set" > c.out
grep -h -v "^mode:" ./*.cover >> c.out
rm -f *.cover
./cc-test-reporter after-build
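The merging step in the loop above can be seen in isolation. This is a sketch with fabricated .cover files and made-up package paths, just to show why the single mode: header matters:

```shell
# Sketch with fabricated profiles: Go cover profiles start with a "mode:"
# line, and a merged profile may contain only one such header.
printf 'mode: set\nexample.com/pkg/a/a.go:1.1,2.2 1 1\n' > pkg-a.cover
printf 'mode: set\nexample.com/pkg/b/b.go:3.3,4.4 1 0\n' > pkg-b.cover

echo "mode: set" > c.out                 # single header for the merged file
grep -h -v "^mode:" ./*.cover >> c.out   # append records, dropping headers
rm -f ./*.cover

cat c.out   # one header followed by both packages' statement records
```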

Abby from Code Climate Support here. Currently, the test-reporter tries to infer where the coverage data could be from a set of default paths. This is usually enough for users following the default setup of the coverage tool.
But in the case where the output is not located on one of those paths, you can use the test-reporter low-level commands to indicate the type of coverage data and where it is. In this particular case you would do something like:
export CC_TEST_REPORTER_ID=<repo-test-reporter-id>
./cc-test-reporter before-build
gocov test -coverprofile=out
./cc-test-reporter format-coverage --input-type gocov out
./cc-test-reporter upload-coverage
You can learn more about any test-reporter command by passing the --help flag. For example: ./cc-test-reporter format-coverage --help.
That should get you in a good place. If not, let us know at hello@codeclimate.com or here, and I can get more insight. :)

For those of you that still have this problem:
For me, the problem was that I had incorrectly named the output coverage file "cp.out" instead of "c.out", which is what cc-test-reporter expects.
Here's my working script:
#!/bin/sh
inputFile=$1
while IFS="=" read -r repo reporterID
do
cd "$HOME""/go/src/github.com/""$repo" || exit
echo "performing code coverage upload for ""$repo"
git checkout master && git pull ##don't know if main is used
cp -f "$HOME"/shellScripts/test-reporter-latest-darwin-amd64 .
chmod 755 test-reporter-latest-darwin-amd64
./test-reporter-latest-darwin-amd64 before-build
export CC_TEST_REPORTER_ID="$reporterID"
go test -coverprofile=c.out ./...
./test-reporter-latest-darwin-amd64 after-build --prefix "github.com/""$repo"
echo
echo "======================="
echo
done < "$inputFile"
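For reference, the while/read loop above expects the input file to hold one repo=reporter-id pair per line. A minimal sketch of just the parsing, with a made-up repo name and ID:

```shell
# Hypothetical input file: one "owner/repo=CC_TEST_REPORTER_ID" pair per line
cat > repos.txt <<'EOF'
myorg/myservice=0123456789abcdef
EOF

# Same parsing as the script above: IFS="=" splits each line at the first "="
while IFS="=" read -r repo reporterID; do
  echo "repo=$repo id=$reporterID"   # prints: repo=myorg/myservice id=0123456789abcdef
done < repos.txt

rm -f repos.txt
```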

Related

gitlab runner throws "Cleaning up file based variables 00:01 ERROR: Job failed: exit code 1" at the end

Even though all my steps pass successfully, GitLab CI shows this -
"Cleaning up file based variables
00:01
ERROR: Job failed: exit code 1"
and fails the job at the very end. Also interestingly, this only happens on my master branch; it runs successfully on other branches. Has anyone faced this issue and found a resolution?
- >
for dir in $(git log -m -1 --name-only -r --pretty="format:" "$CI_COMMIT_SHA"); do
if [[ -f "$dir" ]]; then
SERVICE=$(echo "$dir")
# helm install the service
fi
done
- echo "deployed"
Overview
This drove me crazy and I'm still not sure what the appropriate answer is. I just ran into this issue myself and sank hours into it. I think GitLab messed something up with command substitution (it shows a new release yesterday), although I could be wrong about the issue or its timing. It also seems to occur only for some command substitutions and not others; I initially suspected it might be related to outputting to /dev/null, but I wasn't going to dive too deep. It always failed immediately after the command substitution was initiated.
My code
I had code similar to yours (reduced version below), tried manipulating it multiple ways, but each use of command substitution yielded the same failure message:
Cleaning up file based variables 00:01
ERROR: Job failed: exit code 1
Attempts I've made include the following:
- folders=$(find .[^.]* * -type d -maxdepth 0 -exec echo {} \; 2>/dev/null)
- >
while read folder; do
echo "$folder"
done <<< "$folders"
And ...
- >
while read folder; do
echo "$folder"
done <<< $(find .[^.]* * -type d -maxdepth 0 -exec echo {} \; 2>/dev/null)
Both of those versions succeeded on my local machine but failed in GitLab (I might have typos in the above - please don't scrutinize; it's a reduced version of my actual program).
How I fixed it
Rather than using command substitution $(...), I instead opted for process substitution <(...) and it seems to be working without issue.
- >
while read folder; do
echo "$folder"
done < <(find .[^.]* * -type d -maxdepth 0 -exec echo {} \; 2>/dev/null)
I would try to substitute the same in your code if possible:
- >
while read dir; do
# the rest goes here
done < <(git log -m -1 --name-only -r --pretty="format:" "$CI_COMMIT_SHA")
The issue might also be the line inside the if statement (the echo); you can replace it with the following:
read SERVICE < <(echo "$dir")
Again, not exactly sure this will fix the issue for you as I'm still unsure what the cause is, but it resolved my issue. Best of luck.
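For what it's worth, both constructs feed the same lines to the loop; they differ in how the data is delivered. A bash sketch (unrelated to GitLab) contrasting the two:

```shell
#!/bin/bash
# Command substitution captures the output into a string first...
folders=$(printf 'alpha\nbeta\n')
while read -r folder; do
  echo "cs: $folder"
done <<< "$folders"

# ...while process substitution streams it through a file descriptor,
# with no intermediate variable (or its exit status) involved.
while read -r folder; do
  echo "ps: $folder"
done < <(printf 'alpha\nbeta\n')
```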
The error seemed to vanish for me once I moved the script out of the .gitlab-ci.yml file into a separate script.sh file and called script.sh from the GitLab YAML.
We ran into the same issue in GitLab v13.3.6-ee with the following line of the script that we use for opening a new merge request:
COUNTBRANCHES=`echo ${LISTMR} | grep -o "\"source_branch\":\"${CI_COMMIT_REF_NAME}\"" | wc -l`;
and as @ctwheels stated, changing that line into this:
read COUNTBRANCHES < <(echo ${LISTMR} | grep -o "\"source_branch\":\"${CI_COMMIT_REF_NAME}\"" | wc -l);
solved our problem.
I had this error when I tried to use a protected CI/CD variable.
In my case, my script ended with a curl command to a URL that would return a 403 Forbidden and probably hang:
curl -s "$ENV_URL/hello" | grep "hello world"
... if that helps anyone :-)
It was a very specific use case for me (.NET Core), but it will eventually help someone.
In my case, no error was written in the logs and the tests were executed successfully, but the job failed with the exit message shown in the question.
I was referencing xunit in my source project (not only in my test project), and I don't know why that causes the CI job to fail (it worked locally, showing only a warning: Unable to find testhost.dll. Please publish your test project and retry).
Deleting xunit from my source project (not the test project) resolved the issue.
In my case I just had a conditional command, and if the last condition was false, GitLab thought the script had errored (even though it hadn't, because the exit status of the script's last line becomes its return value).
This is what my script looked like; it would error if the project used yarn but not npm:
[ -f yarn.lock ] && yarn install --frozen-lockfile --cache .npm && yarn prod
[ ! -f yarn.lock ] && npm ci --prefer-offline --cache .npm && npm run prod --cache .npm
So the solution is just to make sure the last line returns true:
[ -f yarn.lock ] && yarn install --frozen-lockfile --cache .npm && yarn prod
[ ! -f yarn.lock ] && npm ci --prefer-offline --cache .npm && npm run prod --cache .npm
true
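The root cause generalizes: a shell script's exit status is the status of its last command. A standalone sketch (nothing GitLab-specific) showing how a failed test on the final line fails the whole script:

```shell
#!/bin/bash
cd "$(mktemp -d)"                    # empty scratch directory, no yarn.lock

# The && chain short-circuits on the failed test, leaving status 1 --
# if this were the script's last line, the CI job would be marked failed.
[ -f yarn.lock ] && echo "yarn path"
echo "status after chain: $?"        # prints: status after chain: 1

# Ending the script with `true` resets the final status to 0.
true
```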

Running a test in Questa from Makefile

I have written a Makefile to run tests in QuestaSim. I am using the following command:
vsim -l transcript -voptargs=+acc test -do $(WAVEDIR)/$(WAVE_FILE)
This opens the Questa window and simulates the test case. Within the Questa console, I then need to run "run -a" to complete the test execution.
Is there a command I can add to my Makefile that will execute the test case without using the Questa console?
Thanks in advance
Regards
S
Simply add a second -do option:
vsim -l transcript -voptargs=+acc test -do $(WAVEDIR)/$(WAVE_FILE) -do 'run -all'
Side note: be careful when using make with Modelsim or Questa. These tools are not parallel safe. If you try to run several compilations in parallel you will probably corrupt your target libraries and get strange errors.
So, if you use make to also compile, create the libraries, etc. you must guarantee that make will not try to launch in parallel several jobs modifying the same library or the same configuration file (e.g. modelsim.ini). With GNU make always add the .NOTPARALLEL: special target to your Makefiles (there are smarter and more efficient ways to handle this parallel problem with locks but this is out of scope).
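A minimal sketch of what that guard looks like in a Makefile (the target, library, and file names here are illustrative, not from the question):

```make
# Forbid parallel execution of targets in this Makefile,
# even when the user invokes `make -j8`.
.NOTPARALLEL:

work:
	vlib work

compile: work
	vlog -work work rtl/*.sv
```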
Here is an example Makefile:
TEST = test_top_0006
WAVE_FILE = wave_test0006.do
PREFIX = europractice questa 10.6c
RTLDIR = ..
WAVEDIR = waves
TRANSCRIPT_FILE = transcript
GITBRANCH = feature
VSIM = $(PREFIX) vsim
VLOG = $(PREFIX) vlog
VCOM = $(PREFIX) vcom
VLIB = $(PREFIX) vlib
VLOG_OPTS = -suppress vlog-2583 +acc
compile: rtl tb
# compile all spu sources
spu = $(RTLDIR)/rtl/opcodeDefs_pkg.sv \
$(RTLDIR)/rtl/au/fp_pkg.sv \
$(RTLDIR)/rtl/spu.sv \
$(RTLDIR)/rtl/spu_top.sv
rtl:
if [ ! -d work ]; then $(VLIB) work; fi
${VLOG} -lint -work work ${VLOG_OPTS} ${spu}
# compile verification environment
tb:
if [ ! -d work ]; then $(VLIB) work; fi
$(VLOG) $(VLOG_OPTS) $(RTLDIR)/rtl/typedefs_pkg.sv
$(VLOG) $(VLOG_OPTS) $(RTLDIR)/rtl/harness.sv
if [ ! -e "${TEST}.sv" ]; then false; fi
${VLOG} $(VLOG_OPTS) ${TEST}.sv
# run simulator in GUI mode
run:
${VSIM} -l $(TRANSCRIPT_FILE) test -do $(WAVEDIR)/$(WAVE_FILE) -do 'run -all'
runc: tb
${VSIM} -c -l $(TRANSCRIPT_FILE) -voptargs=+acc test -do $(WAVEDIR)/nogui.do
# GIT commands
push:
git push origin $(GITBRANCH)
pull:
git pull
commit:
git commit -a
stat:
git status
clean:
rm -rf work
rm -rf vsim.wlf
rm -f $(TRANSCRIPT_FILE)

Check format for Continuous Integration

I am trying to write a Makefile command that will output an error if the Go code is not correctly formatted. This is for a CI step. I am struggling to get it working in the Makefile. This solution works on the bash command line:
! gofmt -l . 2>&1 | read
But copying this into the Makefile:
ci-format:
@echo "$(OK_COLOR)==> Checking formatting$(NO_COLOR)"
@go fmt ./...
@! gofmt -l . 2>&1 | read
I get the following error (make runs recipes with /bin/sh, whose read builtin requires a variable name):
/bin/sh: 1: read: arg count
These days, I use golangci-lint, which includes gofmt checking as an option.
But if for some reason you want to do this yourself, the command I previously used for precisely that purpose is:
diff -u <(echo -n) <(gofmt -d ./)
See, for example, the .travis.yml files on one of my projects.
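The diff trick works because diff exits non-zero exactly when gofmt -d would print anything. The mechanism can be sketched with a hypothetical stand-in in place of gofmt (so no Go toolchain is required):

```shell
#!/bin/bash
# Stand-in for `gofmt -d ./`: empty output means nothing needs reformatting.
formatter_output() { printf '%s' ''; }

if diff -u <(echo -n) <(formatter_output) > /dev/null; then
  echo "formatted"      # no difference from empty -> exit status 0, CI passes
else
  echo "needs gofmt"    # any formatter output -> non-zero exit, CI fails
fi
```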

Is it possible to post coverage for multiple packages to Coveralls?

I want to track test coverage on a Go project using Coveralls. The integration instructions reference using
https://github.com/mattn/goveralls
cd $GOPATH/src/github.com/yourusername/yourpackage
$ goveralls your_repos_coveralls_token
However, this only posts the results for one package, and running each package in turn does not work because the final run overwrites all the others. Has anyone figured out how to get coverage for multiple packages?
I ended up using this script:
echo "mode: set" > acc.out
for Dir in $(find ./* -maxdepth 10 -type d );
do
if ls $Dir/*.go &> /dev/null;
then
go test -coverprofile=profile.out $Dir
if [ -f profile.out ]
then
cat profile.out | grep -v "mode: set" >> acc.out
fi
fi
done
goveralls -coverprofile=acc.out $COVERALLS
rm -rf ./profile.out
rm -rf ./acc.out
It basically finds all the directories in the path and prints a coverage profile for each separately. It then concatenates the files into one big profile and ships it off to Coveralls.
Taking Usman's answer and altering it to support skipping Godeps and other irrelevant folders:
echo "mode: set" > acc.out
for Dir in $(go list ./...);
do
returnval=`go test -coverprofile=profile.out $Dir`
echo ${returnval}
if [[ ${returnval} != *FAIL* ]]
then
if [ -f profile.out ]
then
cat profile.out | grep -v "mode: set" >> acc.out
fi
else
exit 1
fi
done
if [ -n "$COVERALLS_TOKEN" ]
then
goveralls -coverprofile=acc.out -repotoken=$COVERALLS_TOKEN -service=travis-pro
fi
rm -rf ./profile.out
rm -rf ./acc.out
Notice that instead of looking at every directory, I use the go list ./... command, which lists only the directories that actually get used to build the Go package.
Hope that helps others.
** EDIT **
If you are using the vendor folder with Go 1.6+, then this script filters out the dependencies:
echo "mode: set" > acc.out
for Dir in $(go list ./...);
do
if [[ ${Dir} != *"/vendor/"* ]]
then
returnval=`go test -coverprofile=profile.out $Dir`
echo ${returnval}
if [[ ${returnval} != *FAIL* ]]
then
if [ -f profile.out ]
then
cat profile.out | grep -v "mode: set" >> acc.out
fi
else
exit 1
fi
fi
done
if [ -n "$COVERALLS_TOKEN" ]
then
goveralls -coverprofile=acc.out -repotoken=$COVERALLS_TOKEN -service=travis-pro
fi
Has anyone figured out how to get coverage for multiple packages?
Note: with Go 1.10 (Q1 2018), that... will actually be possible.
See CL 76875
cmd/go: allow -coverprofile with multiple packages being tested
You can see the implementation of a multiple package code coverage test in commit 283558e
Jeff Martin has, since the release of Go 1.10 (Feb. 2018), confirmed in the comments:
go test -v -cover ./pkgA/... ./pkgB/... -coverprofile=cover.out gets a good profile and
go tool cover -func "cover.out" will get a total: (statements) 52.5%.
So it is working!
I have been using http://github.com/axw/gocov to get my code coverage.
I trigger it in a bash script, in which I call all my packages.
I also use http://github.com/matm/gocov-html to format the output into HTML.
coverage)
echo "Testing Code Coverage"
cd "${SERVERPATH}/package1/pack"
GOPATH=${GOPATH} gocov test ./... > coverage.json
GOPATH=${GOPATH} gocov-html coverage.json > coverage_report.html
cd "${SERVERPATH}/package2/pack"
GOPATH=${GOPATH} gocov test ./... > coverage.json
GOPATH=${GOPATH} gocov-html coverage.json > coverage_report.html
;;
Hope that helps a little bit.
Here is a pure Go solution:
I created a library that may help, https://github.com/bluesuncorp/overalls
All it does is recursively go through each directory (i.e. each package), run go test and produce cover profiles, then merge all profiles into a single one at the root of the project directory called overalls.coverprofile
Then you can use a tool like https://github.com/mattn/goveralls to send it to coveralls.io
Hope everyone likes it.
In Go 1.13, the following command generates coverage for multiple packages:
go test -v -coverpkg=./... -coverprofile=profile.cov ./...
go tool cover -func profile.cov
For an HTML report:
go tool cover -html=profile.cov -o cover.html

How do I pipe something from the command line to a new Github gist?

I don't know if this exists yet, but I'd love to be able to do:
$ cat mygist.js | gh new gist
And have it return the URL (and possibly copy it to the clipboard / open it in the browser).
Seems like GitHub has a simple REST API, including methods for creating Gists. Just for fun:
$ curl -X POST \
--data-binary '{"files": {"file1.txt": {"content": "Hello, SO"}}}' \
https://api.github.com/gists
This successfully created this Gist. I guess it's enough to get you started.
Try this gem: https://github.com/defunkt/gist
Worked for me ^_^
Here's a simple bash script that takes a filename and makes it a gist:
function msg() {
echo -n '{"description":"","public":false,"files":{"file1.txt":{"content":"'
awk '{gsub(/"/,"\\\""); printf "%s\\n",$0}' "$1"
echo '"}}}'
}
[ "$#" -ne 1 ] && echo "Syntax: gist.sh filename" && exit 1
[ ! -r "$1" ] && echo "Error: unable to read $1" && exit 2
msg "$1" | curl -v --data-binary @- https://api.github.com/gists
FYI: gist replies with the post body, so if the file is big, perhaps grep just the relevant parts of the reply.
As Ronie said above, there is a gist gem which provides a gist command that you can use from your terminal to upload content to https://gist.github.com/
To upload the contents of a.rb just:
gist a.rb
More info http://defunkt.io/gist/
Having the same desire, I found https://www.npmjs.com/package/gistup and forked the repository to https://github.com/CrandellWS/mkg because the developer did not want to support Windows, which was the operating system being used at the time. So I reworked the npm package to work on Windows as well as Linux and Apple...
Full source is available on GitHub:
https://github.com/CrandellWS/mkg
Installation is simple with npm:
npm install -g mkg
Usage is described on the npmjs package page:
https://www.npmjs.com/package/gistup
Once installed, simply cd to whichever directory you want to make a gist from... (remember, there are no subfolders with gists)
and run the command:
mkg
and it will open your new gist in a browser... additionally, you will be able to control it like a normal git repository from there... just no subfolders...
https://stackoverflow.com/a/41233970/1815624
A super simple command I like to use for making gists out of diffs is:
git diff origin master -U15 | gist -t diff
Where 15 is the number of context lines shown before and after each change (so it's easier to see differences in larger files).
-t is the type flag.
