I need to create a build in such a way that unit tests, integration tests and performance tests can be executed separately using Maven and Gradle.
Acceptance criteria:
- Separate build files (Maven and Gradle) for unit tests, integration tests and performance tests.
- Test execution flexible enough to be run on demand:
  - unit tests alone (default profile)
  - unit tests and integration tests (profile with integration tests)
  - unit tests and performance tests (profile with performance tests)
Start reading the documentation:
https://maven.apache.org/guides/
http://books.sonatype.com/mvnref-book/reference/
https://docs.gradle.org/current/userguide/userguide.html
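On the Maven side, one way to sketch this is with profiles that add maven-failsafe-plugin executions on top of the default surefire run. The profile ids (`integration-tests`, `performance-tests`) and the `*PerfIT.java` naming pattern below are illustrative assumptions, not established conventions:

```xml
<!-- Sketch: Maven profiles for on-demand test groups.
     Profile ids and the *PerfIT.java pattern are assumptions. -->
<profiles>
  <!-- mvn verify -P integration-tests : unit tests + integration tests -->
  <profile>
    <id>integration-tests</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-failsafe-plugin</artifactId>
          <executions>
            <execution>
              <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
  <!-- mvn verify -P performance-tests : unit tests + performance tests -->
  <profile>
    <id>performance-tests</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-failsafe-plugin</artifactId>
          <configuration>
            <includes>
              <include>**/*PerfIT.java</include>
            </includes>
          </configuration>
          <executions>
            <execution>
              <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
```

With no profile active only maven-surefire-plugin runs, so `mvn test` executes unit tests alone.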
Related
Scenario
Compile code & run unit tests
Execute sonar scan on code
Once the scan completes, code coverage for unit tests + other analysis gets stored in sonar (lets call report1)
All good till now. Now once the application is deployed, we run some integration tests, and manual tests are also performed.
All these tests will generate code coverage.
Now we have to merge unit test, integrations & manual tests code coverage and store it on sonar under same analysis report (basically update report1)
Any thoughts/suggestions for best practice or solution for same on sonar?
It's pretty simple. Don't run the SonarQube scan until you've produced all the data you want to integrate into the report.
However, I would also point out that it's generally not worthwhile to generate code coverage for anything but unit tests. If you can't reach it with a unit test, I don't see how you could reach it with any other test.
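In practice, "produce all the data first" usually means collecting every JaCoCo `.exec` file (unit, integration, manual sessions) and merging them before the single analysis. A rough sketch with the jacoco-maven-plugin `merge` goal; the file locations here are assumptions about where your runs drop their `.exec` files:

```xml
<!-- Sketch: merge all coverage data before running one Sonar scan.
     The directory and file patterns are illustrative assumptions. -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>merge-coverage</id>
      <phase>verify</phase>
      <goals>
        <goal>merge</goal>
      </goals>
      <configuration>
        <destFile>${project.build.directory}/jacoco-merged.exec</destFile>
        <fileSets>
          <fileSet>
            <directory>${project.build.directory}</directory>
            <includes>
              <include>*.exec</include>
            </includes>
          </fileSet>
        </fileSets>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Then point the scanner at the merged result and run `mvn sonar:sonar` only after all tests have finished, so report1 is produced once with everything in it.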
There is no distinction in SonarQube between unit tests and integration tests. That is a problem.
We have large tests (or integration tests) which stretch across external systems. When these tests fail, only human evaluation can decide whether it is a failure of our application or of the external system. If it is an external-system failure, then we can ignore it.
If SonarQube had a distinction between unit tests and integration tests, I would configure the quality gate to fail if more than 0 unit tests fail, while allowing integration tests to fail.
Because there is no such distinction in SonarQube anymore, we have to solve it in the Jenkins pipeline, expecting all non-large tests (unit tests) to succeed and allowing large tests (integration tests) to fail.
I would prefer not to display any test results in the Jenkins build but have it only in SonarQube. But SonarQube is not capable of that logic and it does not separate the test results (large and non-large).
Maybe SonarQube will reconsider that decision to combine all different kinds of test results? It should be even more flexible than unit/integration tests and allow groups of test to be customized, like small/medium/large tests, with quality gates for each of them.
Or is there a better solution than the one we currently use?
We've been adding an increasing number of Groovy (Spock) unit tests to our existing suite of Java (JUnit) tests.
We've configured things correctly to get Spock code coverage listed in Sonar, but the "Unit Test Success" listings (Tests, Failures, Errors, Skipped) only show for the Java tests.
What configuration do we need to add for the Spock tests to report correctly?
Thank you
Those Java results are fed by the unit test execution report, which is separate from the coverage report. The docs tell you how to feed that data into an analysis.
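For example, with reasonably recent SonarQube scanners the execution-report location is given via the `sonar.junit.reportPaths` analysis property. As long as Surefire runs the Spock specs, their results land in the same XML reports (the directory below is the Surefire default; whether your scanner version uses this exact property name is an assumption to verify against your docs):

```xml
<!-- pom.xml: point the Sonar scanner at the Surefire XML reports,
     which include Spock specs when Surefire executes them. -->
<properties>
  <sonar.junit.reportPaths>target/surefire-reports</sonar.junit.reportPaths>
</properties>
```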
Well, I would like to have a Maven goal execute-custom-tests inside my custom-maven-plugin that consists of running test methods (these tests are not unit tests). Something similar to the test goal of soapui-pro-maven-plugin, for example.
Why? Basically, the main objective of the plugin is testing stuff (not unit testing), and the tests in src/test are for unit testing, right?
Being more specific I was thinking about something like this:
```java
import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugin.MojoExecutionException;
import org.apache.maven.plugin.MojoFailureException;
import org.apache.maven.plugins.annotations.LifecyclePhase;
import org.apache.maven.plugins.annotations.Mojo;
import org.apache.maven.plugins.annotations.Parameter;

@Mojo(name = "run-custom-tests", defaultPhase = LifecyclePhase.TEST)
public class TesterMojo extends AbstractMojo {

    @Parameter(property = "someParameter")
    private String someParameter;

    // [...] parameters for test configuration

    @Override
    public void execute() throws MojoExecutionException, MojoFailureException {
        // Piece of code that executes a set of custom tests whose procedure I specified.
    }
}
```
When tests fail, I would like them to be marked as failed tests, not as failed executions. What's the right thing to do here? Show me the light, please.
Maven conventions support two types of testing out of the box: unit tests (via maven-surefire-plugin) and integration tests (via maven-failsafe-plugin).
By default, maven-surefire-plugin only looks for the following files with unit tests:
**/Test*.java
**/*Test.java
**/*TestCase.java
Similarly, default includes for integration tests run by maven-failsafe-plugin are the following:
**/IT*.java
**/*IT.java
**/*ITCase.java
As you can see, Maven lets each plugin decide which tests it cares about, so it's perfectly fine for src/test/java to contain different types of tests, not just unit tests.
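If your naming scheme differs from those defaults, either plugin's includes can be overridden. For instance, to make Surefire also pick up Spock-style specs (the `*Spec.java` pattern is an assumption about your naming convention):

```xml
<!-- Sketch: widen Surefire's default includes.
     The *Spec.java pattern is an illustrative assumption. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <includes>
      <include>**/*Test.java</include>
      <include>**/*Spec.java</include>
    </includes>
  </configuration>
</plugin>
```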
Different folder
You can put tests in a different folder too. One example would be if you have non-Java tests, since then the src/test/java location doesn't make sense. Standard Maven plugins get the project model from Maven to figure out the src/test/java location, and some third-party plugins use the same mechanism. Depending on the plugin you use, you might want to check out its configuration or use build-helper-maven-plugin's add-test-source goal in order for some plugins to pick up another test folder automatically.
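A sketch of registering an extra test-source folder this way (the `src/perf-test/java` path and execution id are assumptions):

```xml
<!-- Sketch: add a second test-source root via build-helper-maven-plugin.
     The folder name is an illustrative assumption. -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>add-perf-test-sources</id>
      <phase>generate-test-sources</phase>
      <goals>
        <goal>add-test-source</goal>
      </goals>
      <configuration>
        <sources>
          <source>src/perf-test/java</source>
        </sources>
      </configuration>
    </execution>
  </executions>
</plugin>
```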
Different tests on demand
From the Maven perspective, the core difference between unit tests and integration tests is the additional requirements of the latter: they often need your project already packaged, and they often need additional setup or teardown. But you yourself can set up multiple test goals during both the test and integration-test phases. All major test frameworks support specifying which test suite should be run when (e.g., via groups). If your framework doesn't, you can still use plugin includes/excludes. It is standard practice to combine this with Maven profiles in order to run only smoke tests by default (during development) and full tests on the CI environment. You can use the same approach to enable anyone (a tester?) to run extra tests on demand, e.g., to run extra heavy tests when a certain important part of the code has changed.
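For example, with TestNG groups (or JUnit categories) a profile can widen the set of groups Surefire runs; the group names and profile id here are assumptions:

```xml
<!-- Sketch: a profile that also runs slow test groups.
     Group names and profile id are illustrative assumptions. -->
<profiles>
  <!-- mvn test -P full-tests runs the heavy groups too -->
  <profile>
    <id>full-tests</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-surefire-plugin</artifactId>
          <configuration>
            <groups>smoke,heavy</groups>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
```

Without the profile, Surefire's configuration in the main build section applies, so the default `mvn test` can stay restricted to the fast groups.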
Hi, is there a plan to add support for the 'configfailurepolicy' property (http://testng.org/doc/documentation-m...) for TestNG in Gradle?
It's really inconvenient that if some test fails in a Before* phase, all other tests are skipped (even unrelated tests in different classes).