tl;dr - SonarCloud CI on GitHub Actions warns that it can't find any of the source files listed in the coverage report, even though I've confirmed the files exist at the reported paths on the runner's filesystem.
I have a Ruby / Rails app with RSpec specs which produce coverage stats using SimpleCov and its JSON formatter, so my rails_helper.rb starts:
require 'simplecov'
require "simplecov_json_formatter"
SimpleCov.formatter = SimpleCov::Formatter::JSONFormatter
SimpleCov.start('rails') do
add_filter ['/channels/', '/jobs/', '/mailers/']
end
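With this formatter in place, running the suite should produce a JSON report at coverage/coverage.json, which is the path referenced in the sonar-project.properties below. A quick local sanity check (the commands mirror ones used later in the workflow):
bundle exec rspec
head coverage/coverage.json   # note: the file paths recorded in this report are absolute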
I have SonarCloud CI set up to scan using GitHub Actions, with the following sonar-project.properties in the root:
sonar.projectKey=asilano_my-app
sonar.organization=asilano
sonar.ruby.coverage.reportPaths=coverage/coverage.json
# Path is relative to the sonar-project.properties file. Replace "\" by "/" on Windows.
sonar.sources=app,lib
sonar.tests=spec
and the following GitHub workflow:
name: Test and Deploy
on:
pull_request:
types: [opened, synchronize, reopened]
branches:
- 'main'
- 'staging'
push:
branches:
- 'main'
- 'staging'
jobs:
test:
runs-on: ubuntu-latest
services:
postgres:
image: postgres
env:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
ports:
- 5432:5432
steps:
- uses: actions/checkout@v2
- uses: ruby/setup-ruby@v1
with:
bundler-cache: true
- name: Install PostgreSQL client
run: |
sudo apt-get -yqq install libpq-dev
- name: Build App
env:
PGHOST: localhost
PGUSER: postgres
PGPASSWORD: postgres
RAILS_ENV: test
RAILS_MASTER_KEY: ${{ secrets.TEST_MASTER_KEY }}
run: |
bin/rails db:setup
yarn install
- name: Run Tests
env:
PGHOST: localhost
PGUSER: postgres
PGPASSWORD: postgres
RAILS_ENV: test
RAILS_MASTER_KEY: ${{ secrets.TEST_MASTER_KEY }}
run: |
bundle exec rspec
- name: Where Am I?
run: |
head coverage/coverage.json
ls -l /home/runner/work/my-app/my-app/app/lib/some_file.rb
- name: SonarCloud Scan
uses: SonarSource/sonarcloud-github-action@master
env:
GITHUB_TOKEN: ${{ secrets.SONAR_GITHUB_TOKEN }} # Needed to get PR information, if any
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
(main and staging are both long-lasting branches in SonarCloud)
The Where Am I? step is to try and debug the problems I'm having. It shows that the top of coverage.json reads:
{
"meta": {
"simplecov_version": "0.21.2"
},
"coverage": {
"/home/runner/work/my-app/my-app/app/lib/some_file.rb": {
"lines": [
1,
1,
1,
and confirms via ls that the mentioned path exists:
-rw-r--r-- 1 runner docker 1729 Oct 24 08:15 /home/runner/work/my-app/my-app/app/lib/some_file.rb
However, the SonarCloud scan step warns that the coverage file mentions some_file.rb but can't find it in the filesystem:
INFO: Sensor SimpleCov Sensor for Ruby coverage [ruby]
WARN: File '/home/runner/work/my-app/my-app/app/lib/some_file.rb' is present in coverage report but cannot be found in filesystem
...and then repeating for every file in the app.
Why not? Why can't the SonarCloud scanner find some_file.rb on the path reported in the coverage file, even though I've confirmed it's where it should be?
I had the same issue with GitHub Actions, Rails, and SimpleCov. The SonarCloud scan action runs the scanner inside a Docker container, where your repository is mounted at /github/workspace rather than at the runner path SimpleCov records, so you need to rewrite the paths in coverage/coverage.json before the scan. To do this, run the following step before your SonarCloud scan. Also, you could check this post: https://community.sonarsource.com/t/code-coverage-doesnt-work-with-github-action/16747
- name: Running rails tests
run: bundle exec rspec
- name: fix code coverage paths
working-directory: ./coverage
run: |
sed -i 's#'$GITHUB_WORKSPACE'#/github/workspace/#g' coverage.json
- name: SonarCloud Scan
uses: SonarSource/sonarcloud-github-action@master
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
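If you want to sanity-check the rewrite before the scan runs (mirroring the "Where Am I?" step from the question), an optional debug step along these lines can sit right after the sed step; the step name is just illustrative:
- name: Check rewritten coverage paths
  run: |
    # After the sed above, entries should point under /github/workspace
    head coverage/coverage.json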
Related
I have a GitHub Action which runs the PHPUnit tests followed by the SonarQube scanner, but the SonarQube code coverage is always 0%.
The PHPUnit tests pass OK.
SonarQube runs OK as well, but reports no code coverage.
This is my GitHub Actions script; I've removed some jobs related to unit testing here:
name: front-data-stage-unittest
on:
pull_request:
branches: [ master ]
jobs:
Test:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
php: ['8.1']
name: PHP ${{ matrix.php }}
steps:
- name: Checkout repository and submodules
uses: actions/checkout@v3
with:
submodules: recursive
token: ${{ secrets.SUBMODULE_TOKEN }}
- name: Install PHP
uses: shivammathur/setup-php@master
with:
php-version: ${{ matrix.php }}
extensions: mbstring, dom, fileinfo, mysql
coverage: xdebug
- uses: php-actions/composer@v5
with:
php_version: 8.1
args: --profile --ignore-platform-reqs --optimize-autoloader
- name: Execute PHPUnit tests
run: vendor/bin/phpunit --coverage-clover=coverage.xml
- name: SonarQube Scan
uses: SonarSource/sonarqube-scan-action@master
with:
args: >
-Dsonar.php.coverage.reportPaths=coverage.xml
env:
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
sonar-project.properties:
sonar.projectKey=tara_app
sonar.php.coverage.reportPaths=coverage.xml
Possible cause:
Looking at the logs, it seems the scanner is not finding the report file. Has anyone got coverage generation working via GitHub Actions?
14:06:11.466 INFO: 1157/1193 files analyzed, current file: app/Http/Controllers/V2/PerformanceController.php
14:06:13.823 INFO: 1193/1193 source files have been analyzed
14:06:13.825 WARN: PHPUnit xml test report not found: tests/report/test.xml
14:06:13.826 INFO: No PHPUnit coverage reports provided (see 'sonar.php.coverage.reportPaths' property)
14:06:13.826 INFO: Sensor PHP sensor [php] (done) | time=88263ms
14:06:13.826 INFO: Sensor Analyzer for "php.ini" files [php]
The filesystem is not preserved between jobs, so the reports generated by your Test job are not available to your run-sonarqube job.
If you want to share files between jobs you need to use artifacts.
After your PHPUnit step, add a step using actions/upload-artifact, and then add a step before your scan using actions/download-artifact to pull the report into that job; a minimal sketch of both steps is below.
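A sketch of the two steps, assuming the PHPUnit job writes coverage.xml to the workspace root and the scan runs in a separate job (step and artifact names are illustrative):
# In the job that runs PHPUnit, after the test step
- name: Upload coverage report
  uses: actions/upload-artifact@v3
  with:
    name: coverage-report
    path: coverage.xml

# In the job that runs the SonarQube scan, before the scan step
- name: Download coverage report
  uses: actions/download-artifact@v3
  with:
    name: coverage-report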
I'm attempting to create a GHA workflow and I am getting an error that I'm unsure how to fix, as I've implemented this in similar environments before.
name: Deploy Staging
# Controls when the workflow will run
on:
# Triggers the workflow on push events only for the main branch
push:
branches: [ main ]
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
jobs:
# Run the build job first
build:
name: Build
uses: ./.github/workflows/build.yml
deploy-staging:
name: Staging Deploy
runs-on: ubuntu-latest
environment:
name: staging
needs: [build]
permissions:
id-token: write
contents: read
steps:
- uses: actions/setup-node@v3
with:
node-version: '14'
- name: Download build artifacts
uses: actions/download-artifact@v3
with:
name: buildResult
- name: CDK install
run: npm install -g aws-cdk
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v1
with:
role-to-assume: XXXX
aws-region: us-east-1
- name: CDK diff
run: cdk --app . diff staging
- name: CDK deploy
run: cdk --app . deploy staging --require-approval never
- name: Configure DX AWS credentials
uses: aws-actions/configure-aws-credentials@v1
with:
role-to-assume: XXXX
aws-region: us-east-1
role-session-name: "${{ github.actor }}"
- name: Report deployment
uses: XXXX/deployment-tracker-action@v1
if: always()
with:
application-name: XXXX
environment: staging
platform: test
deployment-status: ${{ steps.deploy-workload.outcome == 'success' && 'success' || 'fail' }}
aws-region: us-east-1
XXXX
I don't quite understand where I'm going wrong here, but when I merged my actions branch and attempted to get it to work, I received the following message:
error parsing called workflow "./.github/workflows/build.yml": workflow is not reusable as it is missing a `on.workflow_call` trigger
Below is my build file for reference.
name: Build
# Controls when the workflow will run
on:
pull_request:
branches: [ main ]
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
inputs:
buildEnvironment:
description: Build Environment
required: false
default: production
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
# next build runs lint, don't need a step for it
build:
name: Build
runs-on: ubuntu-latest
permissions:
id-token: write
contents: read
steps:
- uses: actions/checkout@v3
- uses: actions/setup-node@v3
with:
node-version: '14'
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
role-to-assume: XXXX
aws-region: us-east-1
role-session-name: "${{ github.actor }}"
- name: Install Dependencies
run: npm install
- name: CDK install
run: npm install -g aws-cdk
- name: CDK build
run: cdk synth
- name: Upload build artifacts
uses: actions/upload-artifact@v3
with:
name: buildResult
path: |
cdk.out
test:
name: Test
runs-on: ubuntu-latest
permissions:
id-token: write
contents: read
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@v3
- uses: actions/setup-node@v3
with:
node-version: '14'
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
role-to-assume: XXXX
aws-region: us-east-1
role-session-name: "${{ github.actor }}"
- name: Install Dependencies
run: npm install
- name: Run tests
run: npm test
If you want to call another workflow (a reusable workflow), the workflow you're calling needs to have the workflow_call trigger.
Therefore, to resolve your error, change build.yml to:
name: Build
on:
workflow_call:
pull_request:
# etc..
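Since build.yml already declares a buildEnvironment input for workflow_dispatch, you can optionally expose the same input to callers under workflow_call as well. A sketch, noting that workflow_call inputs require an explicit type (the names below simply mirror the existing file):
name: Build
on:
  workflow_call:
    inputs:
      buildEnvironment:
        description: Build Environment
        required: false
        type: string
        default: production
  pull_request:
    branches: [ main ]
  workflow_dispatch:
    inputs:
      buildEnvironment:
        description: Build Environment
        required: false
        default: production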
I have my front-end (repUI) and Cypress tests (repTest) in separate repositories. I am trying to run the tests by checking out both from repTest. Once repUI is set up (install and build), localhost:8000 is ready for the tests to run.
So far I have designed the following workflow:
name: Cypress E2E Tests on Dev
on:
workflow_dispatch:
push:
branches-ignore: [ main ]
schedule:
- cron: '0 0 * * 1-5'
jobs:
setup:
runs-on: ubuntu-20.04
strategy:
matrix:
node-version: [14.x]
steps:
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v1
with:
node-version: ${{ matrix.node-version }}
# Checkout UI repo
- name: Checkout voy-frontend repo
uses: actions/checkout@v2
with:
repository: orgName/xyz-frontend
path: xyz-frontend
ssh-key: ${{ secrets.GIT_SSH_KEY }}
# Install voy-frontend dependencies
- name: Install dependencies for application
run: |
cd voy-frontend
pwd
npm install
- name: Get npm cache directory
id: npm-cache-dir
run: |
echo "::set-output name=dir::$(npm config get cache)"
- uses: actions/cache@v2
id: npm-cache # use this to check for `cache-hit` ==> if: steps.npm-cache.outputs.cache-hit != 'true'
with:
path: ${{ steps.npm-cache-dir.outputs.dir }}
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-
# Run application
- name: Run application
run: |
pwd
cd voy-frontend
npm run serve
cypress-test:
name: Test
needs: setup
runs-on: ubuntu-latest
steps:
# Checkout test automation repo
- name: git checkout
uses: actions/checkout@v2
with:
ref: main
# Install Cypress dependencies
- name: Cypress install
uses: cypress-io/github-action@v2
# Run cypress tests
- name: Cypress run - localhost
run: npm run cy:r -- --env TENANT=${{ secrets.TENANT }},CLIENT_ID=${{ secrets.CLIENT_ID }},CLIENT_SECRET=${{ secrets.CLIENT_SECRET }},ORG_SCOPE=${{ secrets.ORG_SCOPE }},G_SCOPE=${{ secrets.G_SCOPE }} --spec "cypress/integration/specs/Search.feature" --headless --browser chrome
- uses: actions/upload-artifact@v2
if: failure()
with:
name: cypress-videos
path: |
cypress/reports
cypress/videos
cypress/screenshots
retention-days: 1
The problem is that if I run the build command npm run serve for repUI, the application starts successfully but the workflow does not proceed any further; it just waits. If I comment out npm run serve, all the other jobs run as they should.
Here is the screenshot:
The build itself only took ~60 seconds, and then the job waits until I cancel it. The same thing happens every time I trigger the job.
What needs to be done in order to proceed to the cypress-test job once the build is ready?
I have a quick question and I can't find docs on it (maybe it is impossible).
How can I retrieve what has been built inside a container job (like this one):
prod-dependencies:
name: Production Dependencies
runs-on: ubuntu-20.04
container: elixir:1.11-alpine
env:
MIX_ENV: prod
steps:
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Retrieve Cached Dependencies
uses: actions/cache@v2
id: mix-prod-cache
with:
path: |
${{ github.workspace }}/deps
${{ github.workspace }}/_build
key: PROD-${{ hashFiles('mix.lock') }}
- name: Install Dependencies
if: steps.mix-prod-cache.outputs.cache-hit != 'true'
run: |
apk add build-base rust cargo
mix do local.hex --force, local.rebar --force, deps.get --only prod, deps.compile
I need to build on Alpine since I am deploying on Alpine, and some of the C libs the Erlang VM needs are different there.
So this is the place where I build the dependencies, and I want to put them in the cache for the subsequent jobs, but the cache is never populated. According to the docs, GitHub mounts a volume into containers to retrieve artifacts, but I must be misusing it.
Thanks a lot for your help.
Here is a sample GitHub workflow for an Elixir app with a Postgres database that utilizes actions/cache@v2. Note that it defines two separate steps for restoring the deps and _build directories. In this workflow, the mix deps.get and mix compile steps are always run: they should execute very quickly if the cache is restored.
Adapted from this workflow
name: Test
on:
pull_request:
branches:
- develop
paths-ignore:
- 'docs/**'
- '*.md'
jobs:
test:
name: Lint and test
runs-on: ubuntu-18.04
env:
MIX_ENV: test
services:
db:
image: postgres:11
ports: ['5432:5432']
env:
POSTGRES_PASSWORD: postgres
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- uses: actions/checkout@v2
- uses: erlef/setup-elixir@v1
with:
otp-version: '23.1.2'
elixir-version: '1.11.2'
- name: Configure SSH for private Repos
uses: webfactory/ssh-agent@v0.4.1
with:
ssh-private-key: ${{ secrets.HEADLESS_PRIV }}
- name: Restore the deps cache
uses: actions/cache@v2
id: deps-cache-restore
with:
path: deps
key: ${{ runner.os }}-deps-mixlockhash-${{ hashFiles(format('{0}{1}', github.workspace, '/mix.lock')) }}
restore-keys: |
${{ runner.os }}-deps-
- name: Restore the _build cache
uses: actions/cache@v2
id: build-cache-restore
with:
path: _build
key: ${{ runner.os }}-build-mixlockhash-${{ hashFiles(format('{0}{1}', github.workspace, '/mix.lock')) }}
restore-keys: |
${{ runner.os }}-build-
- name: Get/update Mix dependencies
run: |
mix local.hex --force
mix local.rebar
mix deps.get
- name: Compile
run: mix compile
- name: Create database
run: mix ecto.create
- name: Migrate database
run: mix ecto.migrate
- name: Test
run: mix test
I want to create a GitHub workflow that does the following:
test my code with pytest
trigger SonarCloud to analyze the code and show my test coverage
As far as I understand, SonarCloud needs a coverage.xml file to display the code coverage. This can be generated with
pytest --cov=./ --cov-report=xml --doctest-modules
According to this article, coverage.xml should be available under /github/workspace/coverage.xml.
Thus, I specify my sonar-project.properties in the root folder of the project:
sonar.organization=pokemate
sonar.projectKey=PokeMate_name-generator
sonar.sources=.
sonar.python.coverage.reportPath=/github/workspace/coverage.xml
my actions file build.yml:
on:
push:
branches:
- master
- develop
- sonar-qube-setup
jobs:
build:
runs-on:
- ubuntu-latest
steps:
# Checkout repo
- uses: actions/checkout@v2
# Dependencies
- name: Set up Python 3.7
uses: actions/setup-python@v1
with:
python-version: 3.7
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
# Test
- name: Test with pytest
run: |
pytest --cov=./ --cov-report=xml --doctest-modules
# Sonar Qube
- name: SonarCloud Scan
uses: sonarsource/sonarcloud-github-action@master
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
However, SonarCloud still shows 0% test coverage, which is probably because it cannot find the coverage.xml. Any idea how to make this work?
The error came from the missing "s" in reportPaths: the property in the sonar-project.properties file must be sonar.python.coverage.reportPaths (plural), not reportPath.
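For reference, the corrected line in sonar-project.properties, keeping the same path used above, would be:
sonar.python.coverage.reportPaths=/github/workspace/coverage.xml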