Why do I have 0% coverage on New Code in SonarQube?

I created a pull-request in gitlab containing the following python code (in my_file.py):
def is_significant(displayed_a, conversions_a, displayed_b, conversions_b, tails=1, confidence=95):
    # perform some computation (elided) and return True or False
    return res_as_bool
and the following test in tests/test_my_file.py, which runs fine using pytest:
def test_significant_1():
    assert not is_significant(displayed_a=80000, conversions_a=1600, displayed_b=80000, conversions_b=1693, tails=1, confidence=95)
Why does SonarQube mark the line containing assert as both Lines to Cover and Uncovered Lines?
Also, is it related to this warning raised by SonarQube?
Unable to decorate the pull request. Please configure the pull request properties in the project administration.
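This usually means SonarQube never received a coverage report: SonarQube does not run your tests, it only imports a report that your build produces, so every executable line in the changed code is counted under Lines to Cover and, with no report imported, under Uncovered Lines as well. The decoration warning concerns the pull-request integration settings and is most likely a separate issue. A minimal sketch of what the fix might look like, assuming pytest-cov and a sonar-project.properties at the project root (paths and module names are illustrative):

# Generate a coverage report that SonarQube can import
pytest --cov=my_file --cov-report=xml:coverage.xml

# sonar-project.properties
sonar.python.coverage.reportPaths=coverage.xml

If the scanner runs before the report exists, or the configured path does not match, coverage on New Code stays at 0% even though the tests pass.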

Related

Customising Junit5 test output via Gradle

I'm trying to output BDD from my JUnit tests like the following:
Feature: Adv Name Search
Scenario: Search by name v1
Given I am at the homepage
When I search for name Brad Pitt
And I click the search button2
Then I expect to see results with name 'Brad Pitt'
When running in the IntelliJ IDE, this displays nicely, but when running in Gradle nothing is displayed. I did some research and enabled the showStandardStreams test-logging flag, i.e.
In my build.gradle file I've added ...
test {
    useJUnitPlatform()
    testLogging {
        showStandardStreams = true
    }
}
This produces ...
> Task :test

Adv Name Search STANDARD_OUT
    Feature: Adv Name Search
    Tests the advanced name search feature in IMDB

Adv Name Search > Search by name v1 STANDARD_OUT
    Scenario: Search by name v1
    Given I am at the homepage
    When I search for name Brad Pitt
    And I click the search button2
    Then I expect to see results with name 'Brad Pitt'
... which is pretty close, but I don't really want to see the extra output from Gradle (the lines with STANDARD_OUT plus the extra blank lines), e.g.
Adv Name Search STANDARD_OUT
Is there a way to not show the additional Gradle logging in the test section?
Or maybe my tests shouldn't be using System.out.println at all, but rather proper logging (e.g. log4j) plus Gradle config to display these?
Any help / advice is appreciated.
Update (1)
I've created a minimal reproducible example at https://github.com/bobmarks/stackoverflow-junit5-gradle if anyone wants to quickly clone it and run ./gradlew clean test.
You can replace your test { … } configuration with the following to get what you need:
test {
    useJUnitPlatform()
    systemProperty "file.encoding", "utf-8"
    onOutput { descriptor, event ->
        if (event.destination == TestOutputEvent.Destination.StdOut) {
            // Log the test's own output, trimming trailing whitespace,
            // without Gradle's descriptor header.
            logger.lifecycle(event.message.replaceFirst(/\s+$/, ''))
        }
    }
}
See also the docs for onOutput.
FWIW, I had originally posted the following (incomplete) answer, which turned out to focus on the wrong approach of configuring the test logging:
I doubt that this is possible. Let me try to explain why.
Looking at the code which produces the lines that you don’t want to see, it doesn’t seem possible to simply configure this differently:
Here’s the code that runs when something is printed to standard out in a test.
The method it calls next unconditionally adds the test descriptor and event name (→ STANDARD_OUT) which you don’t want to see. There’s no way to switch this off.
So how standard output is logged probably cannot be changed.
What about using a proper logger in the tests, though? I doubt that this will work either:
Running tests basically means running some testing tool – JUnit 5 in your case – in a separate process.
This tool doesn’t know anything/much about who runs it; and it probably shouldn’t care either. Even if the tool should provide a logger or if you create your own logger and run it as part of the tests, then the logger still has to print its log output somewhere.
The most obvious “somewhere” for the testing tool process is standard out again, in which case we wouldn't gain anything.
Even if there was some interprocess communication between Gradle and the testing tool for exchanging log messages, then you’d still have to find some configuration possibility on the Gradle side which configures how Gradle prints the received log messages to the console. I don’t think such configuration possibility (let alone the IPC for log messages) exists.
One thing that can be done is to set the displayGranularity property in the testLogging options.
From the documentation
"The display granularity of the events to be logged. For example, if set to 0, a method-level event will be displayed as "Test Run > Test Worker x > org.SomeClass > org.someMethod". If set to 2, the same event will be displayed as "org.someClass > org.someMethod".

For reporting test cases to Xray we are using Karate Runtime Hooks; the issue is that test cases are not executed when using hooks, even though the hook code ran [duplicate]

Strange behaviour when I call a feature file for test cleanup using the afterFeature hook. The cleanup feature file is called correctly, because I can see the print from the Background section of the file, but for some reason the execution hangs on the Scenario Outline.
I have tried running the feature with the JUnit 5 runner and also in the IntelliJ IDE by right-clicking on the feature file, but I get the same issue: the execution hangs.
This is my main feature file:
Feature: To test afterFeature hook

Background:
  * def num1 = 100
  * def num2 = 200
  * def num3 = 300
  * def dataForAfterFeature =
    """
    [
      {"id": '#(num1)'},
      {"id": '#(num2)'},
      {"id": '#(num3)'}
    ]
    """
  * configure afterFeature = function(){ karate.call('after.feature'); }

Scenario: Test 1
  * print 'Hello World 1'

Scenario: Test 2
  * print 'Hello World 2'
The afterFeature file:
#ignore
Feature: Called after calling feature run is completed

Background:
  * def dynamicData = dataForAfterFeature
  * print 'dynamicData: ' + dynamicData

Scenario Outline: Print dynamic data
  * print 'From after feature for id: ' + <id>

  Examples:
    | dynamicData |
The execution stalls at the Scenario Outline. I can see the printed value of the dynamicData variable in the console, but nothing happens after that.
It seems like the outline loop is not starting or has crashed. I was not able to get details from the log, as the test never finishes and no error is reported. What else can I check, or what might be the issue?
If this is not easily reproducible, what test cleanup workaround do you recommend?
For now, I have used the following workaround: I added a test clean-up scenario at the end of the feature that contains the tests. I have stopped parallel execution for these tests, and to be honest I do not mind them not running in parallel, as they are fast to run anyway.
IDs to delete:
* def idsToDelete =
  """
  [
    101,
    102,
    103
  ]
  """
Test clean-up scenario:
# Test data clean-up scenario
Scenario: Delete test data
  # JS function that calls the delete-test-data feature.
  * def deleteTestDataFun =
    """
    function(x) {
      var temp = [x];
      // Call the feature, passing the argument as a JSON object.
      karate.call('delete-test-data.feature', { id: temp });
    }
    """
  * karate.forEach(idsToDelete, deleteTestDataFun)
This calls the delete-test-data feature and passes it a list of ids that need to be deleted.
Delete test data feature:
Feature: To delete test data

Background:
  * def idVal = id

Scenario: Delete
  Given path 'tests', 'delete', idVal
  Then method delete
Yeah, I personally recommend a strategy of always cleaning up before the tests run, because you cannot guarantee that an "after" hook gets called, e.g. if the machine is switched off.
Sometimes the simplest option is to do this as plain old Java code in your JUnit test suite. So maybe a one-liner after using the Runner is sufficient.
It gets tricky if you need to keep track of dynamic data that your tests have created. What I would do is write a Java singleton, use it in your tests to "collect" the IDs that need to be deleted, and then use it in your JUnit class. You can use things like @AfterClass.
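A minimal sketch of that singleton idea, assuming a JUnit 4-style @AfterClass method and Karate's Java interop; the class name and methods are illustrative, not from the original answer:

public class TestDataRegistry {
    private static final java.util.Set<String> IDS =
            java.util.Collections.synchronizedSet(new java.util.LinkedHashSet<>());

    // Called from tests, e.g. in a feature: Java.type('TestDataRegistry').register(id)
    public static void register(String id) {
        IDS.add(id);
    }

    // Called once from the JUnit class (e.g. in @AfterClass) to fetch and
    // clear everything the tests created, so it can be deleted.
    public static java.util.Set<String> drain() {
        java.util.Set<String> copy = new java.util.LinkedHashSet<>(IDS);
        IDS.clear();
        return copy;
    }
}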
Please try and replicate using the instructions here: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue - because this can indeed be a bug with Scenario Outline.
Finally, you can evaluate ExecutionHook which has an afterSuite() callback: https://github.com/intuit/karate/issues/970#issuecomment-557443551
EDIT: in 1.0 - it has become RuntimeHook: https://github.com/intuit/karate/wiki/1.0-upgrade-guide#hooks
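For example, a rough sketch of such a hook in Karate 1.0+, assuming the RuntimeHook interface exposes a default afterSuite(Suite) method (check the upgrade guide above for the exact signatures):

import com.intuit.karate.RuntimeHook;
import com.intuit.karate.Suite;

public class CleanupHook implements RuntimeHook {
    @Override
    public void afterSuite(Suite suite) {
        // Runs once after the whole suite: delete collected test data here.
    }
}

It would then be registered on the Runner builder, e.g. Runner.path("classpath:features").hook(new CleanupHook()).parallel(1).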

Cypress tests with mocha multi reports; not able to get aggregated results for all test specs

Cypress tests with mocha multi reports don't show results from all the tests
My test structure looks like so:
cypress
  integration
    module1
      module1test1_spec.js
      module1test1_spec.js
    module2
      module2test1_spec.js
      module2test1_spec.js
I have set up Cypress to use mocha multi reporters as per the instructions at https://docs.cypress.io/guides/tooling/reporters.html#Multiple-Reporters
My config.json looks exactly like the one here: https://github.com/cypress-io/cypress-example-docker-circle#spec--xml-reports
When Cypress finishes testing, the results.xml file shows results from the last test spec ONLY: module2test1_spec.js
How to configure this to get the aggregated results from all test spec?
You can use [hash].xml in your path.
e.g. ./path_to_your/test-results.[hash].xml. [hash] is replaced by MD5 hash of test results XML. This enables support of parallel execution of multiple mocha-junit-reporter's writing test results in separate files.
https://www.npmjs.com/package/mocha-junit-reporter#results-report
I solved this problem this way. My config.json file looks like this:

{
  "reporterEnabled": "spec, json, mocha-junit-reporter",
  "mochaJunitReporterReporterOptions": {
    "mochaFile": "multiple-results/[hash].xml"
  }
}
To add to the answer above: if you would like to merge all of these [hash].xml files into a single mergedreport.xml report, you can use the junit-report-merger package. This is useful on a CI pipeline, which usually expects a single report, with a command like the one below:
jrm ./cypress/reports/mergedreport.xml "./cypress/reports/*.xml"

CircleCI get artifact result page using an Access Token

Context
Using Ruby, I'm trying to get the content of the last artifact that is a result of a SimpleCov coverage report.
So far I'm able to retrieve the list of artifacts with this code (using an Access Token):
module Ci
  class Circle
    # more code
    def artifacts
      @artifacts = self.class.get("/project/github/#{@user}/#{@project}/latest/artifacts?circle-token=#{ENV['CI_ACCESS_TOKEN']}")
      self
    end
    # more code
  end
end
Then I'm retrieving the URL of the index page whose content I want, with this code:
def report
  artifacts = JSON.parse(@artifacts.parsed_response)
  artifacts.each do |response|
    url = response['url']
    return url if url.end_with? '.html'
  end
  nil
end
Problem
I want to get the content of the coverage page, located at https://3-57932222-gh.circle-artifacts.com/0//tmp/circle-artifacts.64VZY8w/coverage/index.html, but when logged out it shows "Must be logged in". I've checked the CircleCI documentation but was not able to find out how to pass an access token to this page in order to get its content from the command line (where I'm obviously not logged in). Ideas?
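One thing worth trying, though this is an assumption on my part rather than something confirmed in this thread: artifact URLs have historically accepted the same circle-token query parameter as the API endpoints, so appending the token to the page URL might work. A sketch in Ruby:

require 'net/http'
require 'uri'

# Hypothetical helper: fetch an artifact page by appending the same
# circle-token query parameter used for the API call above.
def artifact_content(url)
  uri = URI.parse("#{url}?circle-token=#{ENV['CI_ACCESS_TOKEN']}")
  Net::HTTP.get(uri)
end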

How to count tags on running scenarios in Ruby Cucumber?

I have a feature file with multiple scenarios and different tags for each of the scenarios. I'm running my Cucumber test using the rake command with a specific tag and am creating a custom HTML report.
The custom HTML report is created in an After hook. The problem I'm facing is how to get the count of the scenarios when running with the rake command. I use
scenario.feature.feature_elements.size
to get the total scenario count, but this gives the count for the whole feature file, and I'm trying to get the count of only the scenarios tagged with a specific tag.
In a Before hook, keep a count of each scenario's tags in a global as you run them:
Before do |scenario|
  $tag_counts ||= {}
  scenario.tags.map(&:name).each do |tag|
    $tag_counts[tag] ||= 0
    $tag_counts[tag] += 1
  end
end
After all scenarios have run, you should be able to use the contents of the global in your custom reporter.
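For instance, a minimal sketch of reading the global once the run ends (at_exit is plain Ruby; the output format is just an illustration):

# In a support file, e.g. features/support/report.rb
at_exit do
  ($tag_counts || {}).each do |tag, count|
    puts "#{tag}: #{count}"
  end
end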
