Is it possible to post coverage for multiple packages to Coveralls? - go

I want to track test coverage on a Go project using Coveralls. The instructions for the integration reference
https://github.com/mattn/goveralls
$ cd $GOPATH/src/github.com/yourusername/yourpackage
$ goveralls your_repos_coveralls_token
However, this only posts the results for one package, and running it for each package in turn does not work because the final run overwrites all the others. Has anyone figured out how to get coverage for multiple packages?

I ended up using this script:
echo "mode: set" > acc.out
for Dir in $(find ./* -maxdepth 10 -type d );
do
if ls $Dir/*.go &> /dev/null;
then
go test -coverprofile=profile.out $Dir
if [ -f profile.out ]
then
cat profile.out | grep -v "mode: set" >> acc.out
fi
fi
done
goveralls -coverprofile=acc.out $COVERALLS
rm -rf ./profile.out
rm -rf ./acc.out
It basically finds all the directories in the path, writes a coverage profile for each one separately, then concatenates the files into one big profile and ships it off to Coveralls.
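For illustration, a merged acc.out would look something like this (paths and counts hypothetical); there is a single mode header, followed by one file:startLine.startCol,endLine.endCol numStatements hitCount record per block:
mode: set
github.com/you/project/pkga/a.go:10.13,12.2 2 1
github.com/you/project/pkgb/b.go:3.20,5.2 1 0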

Taking Usman's answer and altering it to skip Godeps and other irrelevant folders:
echo "mode: set" > acc.out
for Dir in $(go list ./...);
do
returnval=`go test -coverprofile=profile.out $Dir`
echo ${returnval}
if [[ ${returnval} != *FAIL* ]]
then
if [ -f profile.out ]
then
cat profile.out | grep -v "mode: set" >> acc.out
fi
else
exit 1
fi
done
if [ -n "$COVERALLS_TOKEN" ]
then
goveralls -coverprofile=acc.out -repotoken=$COVERALLS_TOKEN -service=travis-pro
fi
rm -rf ./profile.out
rm -rf ./acc.out
Notice that instead of looking at every directory, I use the go list ./... command, which lists all the directories that are actually used to build the Go package.
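For example, in a hypothetical project the command produces one import path per buildable package:
$ go list ./...
github.com/you/project
github.com/you/project/api
github.com/you/project/internal/db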
Hope that helps others.
** EDIT **
If you are using the vendor folder with Go 1.6+, then this script filters out the dependencies:
echo "mode: set" > acc.out
for Dir in $(go list ./...);
do
if [[ ${Dir} != *"/vendor/"* ]]
then
returnval=`go test -coverprofile=profile.out $Dir`
echo ${returnval}
if [[ ${returnval} != *FAIL* ]]
then
if [ -f profile.out ]
then
cat profile.out | grep -v "mode: set" >> acc.out
fi
else
exit 1
fi
else
exit 1
fi
done
if [ -n "$COVERALLS_TOKEN" ]
then
goveralls -coverprofile=acc.out -repotoken=$COVERALLS_TOKEN -service=travis-pro
fi

Has anyone figured out how to get coverage for multiple packages?
Note: with Go 1.10 (Q1 2018), that... will actually be possible.
See CL 76875
cmd/go: allow -coverprofile with multiple packages being tested
You can see the implementation of a multiple package code coverage test in commit 283558e
Since the release of Go 1.10 (Feb. 2018), Jeff Martin has confirmed in the comments:
go test -v -cover ./pkgA/... ./pkgB/... -coverprofile=cover.out gets a good profile and
go tool cover -func "cover.out" will get a total: (statements) 52.5%.
So it is working!
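With Go 1.10+, then, the accumulation scripts above can collapse to something like this (a sketch; pass your own token/service flags to goveralls as shown in the earlier answers):
go test ./... -coverprofile=cover.out
goveralls -coverprofile=cover.out -repotoken=$COVERALLS_TOKEN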

I have been using http://github.com/axw/gocov to get my code coverage.
I trigger this in a bash script in which I call all my packages.
I also use http://github.com/matm/gocov-html to format the output into HTML.
coverage)
echo "Testing Code Coverage"
cd "${SERVERPATH}/package1/pack"
GOPATH=${GOPATH} gocov test ./... > coverage.json
GOPATH=${GOPATH} gocov-html coverage.json > coverage_report.html
cd "${SERVERPATH}/package2/pack"
GOPATH=${GOPATH} gocov test ./... > coverage.json
GOPATH=${GOPATH} gocov-html coverage.json > coverage_report.html
;;
Hope that helps a little bit.

Here is a pure Go solution:
I created a library that may help, https://github.com/bluesuncorp/overalls
All it does is recursively go through each directory (i.e. each package), run go test and produce coverprofiles, then merge all the profiles into a single one at the root of the project directory called overalls.coverprofile.
Then you can use a tool like https://github.com/mattn/goveralls to send it to coveralls.io.
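A minimal usage sketch (flag names as I recall them from the overalls README; verify with overalls -help):
overalls -project=github.com/yourusername/yourpackage -covermode=set
goveralls -coverprofile=overalls.coverprofile -repotoken=$COVERALLS_TOKEN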
Hope everyone likes it.

In Go 1.13, the following command generates coverage for multiple packages:
go test -v -coverpkg=./... -coverprofile=profile.cov ./...
go tool cover -func profile.cov
For an HTML report:
go tool cover -html=profile.cov -o cover.html

Related

gitlab runner throws "Cleaning up file based variables 00:01 ERROR: Job failed: exit code 1" at the end

Even though all my steps pass successfully, GitLab CI shows this:
"Cleaning up file based variables
00:01
ERROR: Job failed: exit code 1"
and fails the job at the very end. Interestingly, this only happens for my master branch; it runs successfully on other branches. Has anyone faced this issue and found a resolution?
- >
for dir in $(git log -m -1 --name-only -r --pretty="format:" "$CI_COMMIT_SHA"); do
if [[ -f "$dir" ]]; then
SERVICE=$(echo "$dir")
# helm install the service
fi
done
- echo "deployed"
Overview
This drove me crazy and I'm still not sure what the appropriate answer is. I just ran into this issue myself and sank hours into it. I think GitLab messed something up with command substitution (a new release shipped yesterday), although I could be wrong about the issue or its timing. It also seems to occur only for some command substitutions and not others; I initially suspected it might be related to outputting to /dev/null, but I wasn't going to dive too deep. It always failed immediately after the command substitution was initiated.
My code
I had code similar to yours (reduced version below), tried manipulating it multiple ways, but each use of command substitution yielded the same failure message:
Cleaning up file based variables 00:01
ERROR: Job failed: exit code 1
Attempts I've made include the following:
- folders=$(find .[^.]* * -type d -maxdepth 0 -exec echo {} \; 2>/dev/null)
- >
while read folder; do
echo "$folder"
done <<< "$folders"
And ...
- >
while read folder; do
echo "$folder"
done <<< $(find .[^.]* * -type d -maxdepth 0 -exec echo {} \; 2>/dev/null)
Both those versions succeeded on my local machine but failed in GitLab (I might have typos in the above - please don't scrutinize, it's a reduced version of my actual program).
How I fixed it
Rather than using command substitution $(...), I instead opted for process substitution <(...) and it seems to be working without issue.
- >
while read folder; do
echo "$folder"
done < <(find .[^.]* * -type d -maxdepth 0 -exec echo {} \; 2>/dev/null)
I would try to substitute the same in your code if possible:
- >
while read dir; do
# the rest goes here
done < <(git log -m -1 --name-only -r --pretty="format:" "$CI_COMMIT_SHA")
The issue might also be the line inside the if statement (the echo); you can replace that with the following:
read SERVICE < <(echo "$dir")
Again, not exactly sure this will fix the issue for you as I'm still unsure what the cause is, but it resolved my issue. Best of luck.
The error vanished for me once I moved the script out of the .gitlab-ci.yml file into a separate script.sh file and called script.sh from the GitLab YAML.
We ran into the same issue in GitLab v13.3.6-ee with the following line of the script that we use for opening a new merge request:
COUNTBRANCHES=`echo ${LISTMR} | grep -o "\"source_branch\":\"${CI_COMMIT_REF_NAME}\"" | wc -l`;
and as @ctwheels stated, changing that line into this:
read COUNTBRANCHES < <(echo ${LISTMR} | grep -o "\"source_branch\":\"${CI_COMMIT_REF_NAME}\"" | wc -l);
solved our problem.
I had this error when I tried to use a protected CI/CD variable.
In my case, my script ended with a curl command to a URL that would return a 403 Forbidden and probably hang up:
curl -s "$ENV_URL/hello" | grep "hello world"
... if that helps anyone :-)
It was a very specific use case for me (.NET Core), but it may eventually help someone.
In my case, no error was written in the logs and the tests were executed successfully, but the job failed with the exit message shown in the question.
I was referencing xunit in my source project (not only in my test project) and I don't know why that caused the CI job to fail (it worked locally, showing only a warning: Unable to find testhost.dll. Please publish your test project and retry).
Deleting xunit from my source project (not the test project) resolved the issue.
In my case I just had a conditional command, and if the last condition was false, GitLab thought the script had errored (even though it hadn't, because GitLab uses the exit code of the last line as the job's return value).
This is what my script looked like, and it would error if the project used yarn but not npm:
[ -f yarn.lock ] && yarn install --frozen-lockfile --cache .npm && yarn prod
[ ! -f yarn.lock ] && npm ci --prefer-offline --cache .npm && npm run prod --cache .npm
So the solution is just to make sure the last line returns true
[ -f yarn.lock ] && yarn install --frozen-lockfile --cache .npm && yarn prod
[ ! -f yarn.lock ] && npm ci --prefer-offline --cache .npm && npm run prod --cache .npm
true
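Equivalently, here is a sketch that makes the branch explicit, so the job's exit code only reflects a real failure:
if [ -f yarn.lock ]; then
    yarn install --frozen-lockfile --cache .npm && yarn prod
else
    npm ci --prefer-offline --cache .npm && npm run prod --cache .npm
fi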

How can I show cppcheck output in Xcode?

I want to display cppcheck output directly in the Xcode Issue Navigator. How can I do that?
Here is a simple script that you can add as a Run Script Phase in your Build Phases in Xcode:
#!/bin/bash
srcDir=src
if which cppcheck >/dev/null; then
    # Run cppcheck; its report goes to stderr, so capture that.
    cppcheck -j 4 --enable=all --inline-suppr "$srcDir" 2>cppcheck.txt 1>/dev/null
    pwd=$(pwd)
    # Rewrite "[file:line]: msg" as "fullpath/file:line: warning: msg" so Xcode parses it.
    sed "s|\[|${pwd}/|" cppcheck.txt | sed 's|\]: |: warning: |'
    rm cppcheck.txt
else
    echo "warning: cppcheck not installed, install here: http://brewformulas.org/Cppcheck"
fi
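For reference, the sed pipeline rewrites a cppcheck line like this (file and message hypothetical):
[src/main.cpp:42]: (style) The scope of the variable 'x' can be reduced.
into the file:line: warning: format that the Issue Navigator picks up:
/path/to/project/src/main.cpp:42: warning: (style) The scope of the variable 'x' can be reduced.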

Codeclimate test coverage formatter for Golang

Nowhere in the Codeclimate docs is it written how to specify a coverage formatter. But when I try to send coverage to Codeclimate:
./cc-test-reporter before-build
./cc-test-reporter after-build
It is failing:
Error: could not find any viable formatter. available formatters: simplecov, lcov, coverage.py, clover, gocov, gcov, cobertura, jacoco
I have gocov installed. I also generated a report with gocov:
gocov test -coverprofile=out
And I tried to specify the report file to Codeclimate in various ways:
./cc-test-reporter after-build out
./cc-test-reporter after-build < out
But I had no luck...
I haven't found any formatter-related directives for the .codeclimate.yml file. The doc is written in a super "you know" style, so it didn't help. How do I enable/send test coverage with Codeclimate?
Export the variable:
CC_TEST_REPORTER_ID=...
Run:
for pkg in $(go list ./... | grep -v vendor); do
go test -coverprofile=$(echo $pkg | tr / -).cover $pkg
done
echo "mode: set" > c.out
grep -h -v "^mode:" ./*.cover >> c.out
rm -f *.cover
./cc-test-reporter after-build
Abby from Code Climate Support here. Currently, the test-reporter tries to infer where the coverage data could be from a set of default paths. This is usually enough for users following the default setup of the coverage tool.
But, in the case where the output is not located on one of those paths, you can use the test-reporter low-level commands to indicate the type of coverage data and where it is. In this particular case you would do something like:
export CC_TEST_REPORTER_ID=<repo-test-reporter-id>
./cc-test-reporter before-build
gocov test -coverprofile=out
./cc-test-reporter format-coverage --input-type gocov out
./cc-test-reporter upload-coverage
You can check more on how to use a test-reporter command by using the flag --help. For example, ./cc-test-reporter format-coverage --help.
That should get you in a good place. If not, let us know at hello@codeclimate.com or here, and I can get more insight. :)
For those of you that still have this problem:
For me, the problem was that I had incorrectly named the output coverage file "cp.out" instead of "c.out", which is what cc-test-reporter expects.
Here's my working script:
#!/bin/sh
inputFile=$1
while IFS="=" read -r repo reporterID
do
    cd "$HOME""/go/src/github.com/""$repo" || exit
    echo "performing code coverage upload for ""$repo"
    git checkout master && git pull ##don't know if main is used
    cp -f "$HOME"/shellScripts/test-reporter-latest-darwin-amd64 .
    chmod 755 test-reporter-latest-darwin-amd64
    ./test-reporter-latest-darwin-amd64 before-build
    export CC_TEST_REPORTER_ID="$reporterID"
    go test -coverprofile=c.out ./...
    ./test-reporter-latest-darwin-amd64 after-build --prefix "github.com/""$repo"
    echo
    echo "======================="
    echo
done < "$inputFile"

Run wget and other commands in shell script

I'm trying to create a shell script that will download the latest Atomic gotroot rules to my server, unpack them, copy them to the correct folder, etc.
I've been reading shell tutorials and forum posts for most of the day, but the syntax escapes me for some of these. I have run all these commands manually and I know they work.
I know I need to develop some error checking, but I'm just trying to get the commands to run correctly. The main problem at the moment is the syntax of the wget commands; I've got errors about missing semi-colons, divide by zero, and unsupported schemes. I've tried various quoting (single and double) and escaping of - / " characters in various combinations.
Thanks for any help.
The raw wget command is
wget --user="jim" --password="xxx-yyy-zzz" "http://updates.atomicorp.com/channels/rules/subscription/VERSION"
#!/bin/sh
update_modsec_rules(){
wget=/usr/bin/wget
tar=/bin/tar
apachectl=/usr/bin/apache2ctl
TXT="Script Run Finished"
WORKING_DIR="/var/asl/updates"
TARGET_DIR="/usr/local/apache/conf/modsec_rules/"
EXISTING_FILES="/var/asl/updates/modsec/*"
EXISTING_ARCH="/var/asl/updates/modsec-*"
WGET_OPTS='--user=jim --password=xxx-yyy-zzz'
URL_BASE="http://updates.atomicorp.com/channels/rules/subscription"
# change to working directory and cleanup any downloaded files and extracted rules in modsec/ directory
cd $WORKING_DIR
rm -f $EXISTING_ARCH
rm -f $EXISTING_FILES
rm -f VERSION*
# wget to download VERSION file
$wget ${WGET_OPTS} "${URL_BASE}/VERSION"
# get current MODSEC_VERSION from VERSION file and save as variable
source VERSION
TARGET_DATE=$MODSEC_VERSION
echo $TARGET_DATE
# wget to download current archive
$wget ${WGET_OPTS} "${URL_BASE}/modsec-${TARGET_DATE}.tar.gz"
# extract archive
echo "extracting files . . . "
tar zxvf $WORKING_DIR/modsec-${TARGET_DATE}.tar.gz
echo "copying files . . . "
cp -uv $EXISTING_FILES $TARGET_DIR
echo $TXT
}
update_modsec_rules "$@" 2>&1 | tee -a /var/asl/modsec_update.log
RESTART_APACHE="/usr/local/cpanel/scripts/restartsrv httpd"
$RESTART_APACHE
Here are some guidelines to use when writing shell scripts.
Always quote variables when you use them. This helps avoid the possibility of misinterpretation. (What if a filename contains a space? See the illustration after this list.)
Don't trust fileglobbing on commands like rm. Use for loops instead. (What if a filename starts with a hyphen?)
Avoid subshells when possible. Your lines with backquotes make me itchy.
Don't exec if you can help it. And especially don't expect any parts of your script after your exec to actually get run.
I should point out that while your shell may be bash, you've specified /bin/sh for execution of this script, so it is NOT a bash script.
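To illustrate the first guideline (file name hypothetical):
f="my file.txt"
rm $f      # word-splits: tries to remove "my" and "file.txt"
rm "$f"    # removes the intended file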
Here's a rewrite with some error checking. Add salt to taste.
#!/bin/sh
# Linux
wget=/usr/bin/wget
tar=/bin/tar
apachectl=/usr/sbin/apache2ctl
# FreeBSD
#wget=/usr/local/bin/wget
#tar=/usr/bin/tar
#apachectl=/usr/local/sbin/apachectl
TXT="GOT TO THE END, YEAH"
WORKING_DIR="/var/asl/updates"
TARGET_DIR="/usr/local/apache/conf/modsec_rules/"
EXISTING_FILES_DIR="/var/asl/updates/modsec/"
EXISTING_ARCH_DIR="/var/asl/updates/"
URL_BASE="http://updates.atomicorp.com/channels/rules/subscription"
WGET_OPTS='--user="jim" --password="xxx-yyy-zzz"'
if [ ! -x "$wget" ]; then
echo "ERROR: No wget." >&2
exit 1
elif [ ! -x "$apachectl" ]; then
echo "ERROR: No apachectl." >&2
exit 1
elif [ ! -x "$tar" ]; then
echo "ERROR: Not in Kansas anymore, Toto." >&2
exit 1
fi
# change to working directory and cleanup any downloaded files
# and extracted rules in modsec/ directory
if ! cd "$WORKING_DIR"; then
echo "ERROR: can't access working directory ($WORKING_DIR)" >&2
exit 1
fi
# Delete each file in a loop.
for file in "$EXISTING_FILES_DIR"/* "$EXISTING_ARCH_DIR"/modsec-*; do
rm -f "$file"
done
# Move old VERSION out of the way.
mv VERSION VERSION-$$
# wget1 to download VERSION file (replaces WGET1)
if ! $wget $WGET_OPTS "${URL_BASE}/VERSION"; then
echo "ERROR: can't get VERSION" >&2
mv VERSION-$$ VERSION
exit 1
fi
# get current MODSEC_VERSION from VERSION file and save as variable,
# but DON'T blindly trust and run scripts from an external source.
if grep -q '^MODSEC_VERSION=' VERSION; then
TARGET_DATE="`sed -ne '/^MODSEC_VERSION=/{s/^[^=]*=//p;q;}' VERSION`"
echo "Target date: $TARGET_DATE"
fi
# Download current archive (replaces WGET2)
if ! $wget ${WGET_OPTS} "${URL_BASE}/modsec-$TARGET_DATE.tar.gz"; then
echo "ERROR: can't get archive" >&2
mv VERSION-$$ VERSION # Do this, don't do this, I don't know your needs.
exit 1
fi
# extract archive
if [ ! -f "$WORKING_DIR/modsec-${TARGET_DATE}.tar.gz" ]; then
echo "ERROR: I'm confused, where's my archive?" >&2
mv VERSION-$$ VERSION # Do this, don't do this, I don't know your needs.
exit 1
fi
tar zxvf "$WORKING_DIR/modsec-${TARGET_DATE}.tar.gz"
for file in "$EXISTING_FILES_DIR"/*; do
cp "$file" "$TARGET_DIR/"
done
# So far so good, so let's restart apache.
if $apachectl configtest; then
if $apachectl restart; then
# Success!
rm -f VERSION-$$
echo "$TXT"
else
echo "ERROR: PANIC! Apache didn't restart. Notify the authorities!" >&2
exit 3
fi
else
echo "ERROR: Apache configs are broken. We're still running, but you'd better fix this ASAP." >&2
exit 2
fi
Note that while I've rewritten this to be more sensible, there is certainly still a lot of room for improvement.
You have two options:
1- changing this to
WGET1=' --user="jim" --password="xxx-yyy-zzz" "http://updates.atomicorp.com/channels/rules/subscription/VERSION"'
then run
wget $WGET1 (and the same for WGET2)
Or
2- encapsulating $WGET1 with backquotes ``,
e.g.:
`$WGET1`
This applies to any command you're executing out of a variable.
Suggested changes:
#!/bin/sh
TXT="GOT TO THE END, YEAH"
WORKING_DIR="/var/asl/updates"
TARGET_DIR="/usr/local/apache/conf/modsec_rules/"
EXISTING_FILES="/var/asl/updates/modsec/*"
EXISTING_ARCH="/var/asl/updates/modsec-*"
WGET1='wget --user="jim" --password="xxx-yyy-zzz" "http://updates.atomicorp.com/channels/rules/subscription/VERSION"'
WGET2='wget --user="jim" --password="xxx-yyy-zzz" "http://updates.atomicorp.com/channels/rules/subscription/modsec-$TARGET_DATE.tar.gz"'
## change to working directory and cleanup any downloaded files and extracted rules in modsec/ directory
cd $WORKING_DIR
rm -f $EXISTING_ARCH
rm -f $EXISTING_FILES
## wget1 to download VERSION file
`$WGET1`
## get current MODSEC_VERSION from VERSION file and save as variable
source VERSION
TARGET_DATE=`echo $MODSEC_VERSION`
## WGET2 command to download current archive
`$WGET2`
## extract archive
tar zxvf $WORKING_DIR/modsec-$TARGET_DATE.tar.gz
cp $EXISTING_FILES $TARGET_DIR
## restart server
exec '/usr/local/cpanel/scripts/restartsrv_httpd' $*;
Pro tip: if you need string substitution, using ${VAR} is much better for eliminating ambiguity, e.g.:
tar zxvf $WORKING_DIR/modsec-${TARGET_DATE}.tar.gz
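As an aside: if you can require bash rather than plain sh, storing the options in an array sidesteps these quoting problems entirely. A minimal sketch (credentials hypothetical):
#!/bin/bash
wget_opts=(--user=jim --password=xxx-yyy-zzz)
wget "${wget_opts[@]}" "http://updates.atomicorp.com/channels/rules/subscription/VERSION"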

svn checkout to deploy via shell

I have the following problem: I need to organize automatic deployment from an svn repository to a deploy server, with a few extra features.
Here is how I wrote it:
# $1 - project; $2 - version (optional)
# rm -rf $projectDir
if [ "$2" == '' ]; then
svn export $trunk $projectDir --force >> $log
version=`svn info $trunk | grep Revision | awk '{print$2}'`
svn copy $trunk $tags/$version -m "created while uploading last version of $1"
echo "New stable version #$version of $1 is created
Uploading to last version is completed successfully"
else
version=$2
svn export $tags/$version/ $projectDir --force >> $log
echo "Revert to version #$version is completed successfully"
fi
echo $version > $projectDir/version
chown -R $1:$1 $projectDir
But svn export doesn't delete files that have been deleted in svn, so I need to clean the directory before every export. That's not good.
Before this, I deployed with checkout, like this:
svn co $trunk >> $log
cp -ruf trunk/* $projectDir
svn info $trunk | grep Revision > $projectDir/version
chown -R $project:$project $projectDir
echo "uploading finished"
This works very well and is much faster than the export (it only transfers changed files), but:
without automatic tag creation;
without the opportunity for nice reverting.
In my last script, co doesn't work, because it tries to check out into one directory from different repository directories (trunk/some tag), which isn't possible.
So, question:
Can I relocate project before checkout?
Can I find the diff with co version and existing version before export?
What can I do with diff result? (remove unneeded files after export?)
Thanks in advance.
Have you evaluated Capistrano? It can do a lot of what you're trying to achieve.
The following code was taken as the basis for the solution; it's simpler and, for me, fully solves the problem:
if [ "$2" == '' ]; then
version=`svn info ${trunk} | grep Revision | awk '{print$2}'`
if [ `cat ${projectWWW}/version` == "${version}" ]; then
resultMessage="Project is up to date"
else
svn co ${trunk} ${projectRoot}/co >> ${log}
cp -ruf ${projectRoot}/co/ ${projectRoot}/releases/${version}
chown -R $1:$1 ${projectRoot}/releases/${version}
resultMessage="New stable version #$version of $1 is created
Uploading to last version is completed successfully"
fi
else
version=$2
resultMessage="Revert to version #$version is completed successfully"
fi
ln -s ${projectRoot}/releases/${version} ${projectWWW}
echo ${version} > ${projectWWW}/version
echo ${resultMessage} >> ${log}
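For reference, the layout this produces is one directory per release plus a symlink pointing at the active one (paths hypothetical):
/var/project/co/                  # svn working copy, updated incrementally
/var/project/releases/1234/       # snapshot copied from the working copy
/var/www/project -> /var/project/releases/1234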
