SonarQube with small/medium/large test organization

I am trying to apply SonarQube analysis to our system. The system is roughly laid out in multiple modules. We have some small and medium tests in each module, and hope to create large tests in the future. We are trying to use Google's "small/medium/large" test naming.
SonarQube seems to refer to unit and integration tests (roughly equivalent to small and medium in our environment). I'm wondering if anyone knows a simple way to modify the labeling to better match what we are trying to set up.

It is not possible to change the labels in SonarQube. Unit and integration tests are two very common types of tests; IMO you should stick to this convention.
Just to share some information: at SonarSource, we have unit tests, medium tests and integration tests, and when we analyse our code on our internal SonarQube instance, the medium tests end up in the "unit test" category (they are executed at the same time BTW).

I was able to switch the labels Unit -> Small and Integration -> Medium by creating a language pack plugin. I started from the French language pack and modified the existing core.properties file. The solution gives me a "localized" version of the site that uses our naming convention.
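For reference, the change boils down to overriding a few entries in that bundle. The key names below are only illustrative (copy the exact ones from the core.properties shipped with your SonarQube version), but the idea looks like this:

    # Overrides placed in the language pack's core.properties
    # (key names are illustrative - use the exact keys from your SonarQube version)
    metric.tests.name=Small tests
    metric.test_success_density.name=Small test success (%)
    metric.it_coverage.name=Medium test coverage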

Related

How to maintain JMeter scripts in an agile environment

We are following agile for our software development and we get a new build every 2 days. How should I maintain my JMeter scripts? Which tool will help me maintain the scripts so that I can see the gradual improvement or degradation of our product?
Most probably you're looking for a continuous integration tool: if you have a build server which compiles your product, runs unit tests, packages it, etc., you can add your JMeter tests as an extra step to act as a regression test.
If you don't have a specific continuous integration solution in mind, I'd recommend going for Jenkins; it's free, open source and Java-based, so you won't need to set up an "alien" runtime.
With regard to JMeter and Jenkins integration, there is the Performance Plugin, which can display the performance trends you're looking for across builds and can also mark builds as unstable or failed depending on metric thresholds.

How to set up Jenkins for build, unit test and system tests

I want to set up Jenkins for a decent build chain for a JavaFX application that controls a robotic arm and other hardware:
We are using BitBucket with the Git Flow model.
We have 2 development modules and 1 System Test module in IntelliJ that we build with Maven.
We have unit tests, integration tests and system tests. Unit and integration tests use JUnit and can run on the Jenkins master; for the system tests we use TestNG, and they run the TestFX tests on a Jenkins agent.
(I think TestNG is better suited to system tests than JUnit.)
A development build project (build, unit + integration tests) was already in place. The test chain was recently set up by copying the development project, adding the system tests and ignoring the unit/integration tests (so the application is built twice).
We have 2 types of System tests:
Tests that are fairly simple and run on the application itself
Tests that are more complex and run on the application that interacts with several simulators for the robotic arm
Now I need to set up the 2nd type of tests.
My question would be: what is the best way to set this up in Jenkins?
I'm reading about Jenkins Pipelines and the Blue Ocean plugin here, and about a matrix configuration project here. To me it is all a bit confusing which is the ideal way to achieve my goals.
I have no clue how to scale from a testng.xml file in my SystemTest module to flexible tests.
Can I put a sort of capabilities tag on tests so that the correct preconditions are set? For example, for tests in category 1, only the main application needs to be started for TestFX. However, for tests in category 2, several simulators need to be started and configured. I think using a sort of capabilities tag will make this much more maintainable.
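A rough sketch of what I mean, using TestNG groups as the "capability" tag (class, group and test names here are just placeholders):

    import static org.testng.Assert.assertTrue;

    import org.testng.annotations.BeforeGroups;
    import org.testng.annotations.Test;

    // Sketch only - names are placeholders for our real system tests.
    public class RoboticArmSystemTest {

        private boolean simulatorRunning;

        // Runs once before any test in the "needs-simulator" group is executed.
        @BeforeGroups(groups = "needs-simulator")
        public void startSimulators() {
            // In the real setup this would start and configure the robotic-arm simulators.
            simulatorRunning = true;
        }

        // Category 1: only the main application is needed for TestFX.
        @Test(groups = "app-only")
        public void mainScreenShowsConnectionStatus() {
            // TestFX interactions against the running application would go here.
            assertTrue(true);
        }

        // Category 2: only runs when the simulator "capability" is requested.
        @Test(groups = "needs-simulator")
        public void armMovesToHomePosition() {
            assertTrue(simulatorRunning);
        }
    }

Each agent (or pipeline stage) could then select only the groups it can satisfy, e.g. via the <groups> element in testng.xml or with mvn test -Dgroups=needs-simulator.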
My goals:
Easy to maintain Jenkins flow
Efficient building, so a preference for copying artifacts instead of building a second time
Possibility to split the system tests over multiple agents, preferably without me having to be concerned about what runs where (similar to Selenium Grid)
Correct dependencies (simulators etc) are started depending if the test needs them
We are looking into running the tests on VMs with OpenGL 3D acceleration due to a canvas used in the application. If tests are able to allocate, start, stop VMs on demand, that would be cool (but would only save some electricity)
Easy reporting where all test results are gathered from all agents. Note that I prefer the JUnit report, which highlights which tests were ignored (@Ignore); the TestNG report format doesn't say anything about ignored tests.

What does xUnit compatibility entail?

I was researching unit testing frameworks and the Wikipedia list has a column which lists whether a framework is considered the "xUnit type" or "compatible". Mocha was listed as not being of the "xUnit type" – why? What are the core features of the xUnit family?
xUnit frameworks share the following concepts:
Test runner - the program that runs the tests
Test case - the base class that the individual tests belong to (or inherit from)
Test fixture - the state needed to run the tests
Test suite - a collection of tests that share the same fixture
Assertion - the function that verifies the state under test
Test formatter - shows the results in one or more formats. This is a bit tricky, since the formats are not always the same. For example, Inria's page specifies the XML tags as test-case and test-suite. JUnit, on the other hand, uses testcase and testsuite.
You can think of xUnit frameworks as *Unit... where the * is replaced by the language (e.g., JUnit for Java).
What's very tricky is that xUnit.net is different from xUnit. xUnit.net is a framework in itself that also incorporates the aforementioned concepts. Its XML output format uses different tags though, such as assembly, class, etc. It can get very confusing when googling for issues.
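To make these concepts concrete, here is a minimal JUnit 4 sketch (JUnit 4 wires tests together with annotations rather than inheritance from TestCase, but the roles are the same):

    import static org.junit.Assert.assertEquals;

    import java.util.ArrayDeque;
    import java.util.Deque;

    import org.junit.Before;
    import org.junit.Test;

    // Test case: groups the individual tests for one unit.
    public class StackTest {

        // Test fixture: the state the tests need.
        private Deque<String> stack;

        // Fixture setup, invoked by the test runner before each test.
        @Before
        public void setUp() {
            stack = new ArrayDeque<>();
        }

        @Test
        public void pushThenPopReturnsTheSameElement() {
            stack.push("a");
            // Assertion: verifies the expected state.
            assertEquals("a", stack.pop());
        }
    }

The test runner (your IDE, mvn test, or a CI job) discovers the class, runs setUp before each test, evaluates the assertions, and a formatter turns the results into a report such as JUnit's testsuite/testcase XML.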

TDD: refactoring and global regressions

While the refactoring step of test-driven development should always involve another full run of tests for the given functionality, what is your approach to preventing possible regressions beyond the functionality itself?
My professional experience makes me want to retest the whole functional module after any code change. Is that what TDD recommends?
Thank you.
While the refactoring step of test-driven development should always involve another full run of tests for the given functionality, what is your approach to preventing possible regressions beyond the functionality itself?
When you are working on a specific feature, it is enough to run the tests for the given functionality only. There is no need to do a full regression.
My professional experience makes me want to retest the whole functional module after any code change.
You do not need to do a full regression, but you can, since unit tests are small, simple and fast.
Also, there are several tools that are used for "Continuous Testing" in different languages:
in Ruby (e.g. Watchr)
in PHP, (e.g. Sismo)
in .NET (e.g. NCrunch)
All these tools are used to run tests automatically on your local machine to get fast feedback.
Only when you are about to finish the implementation of the feature is it time to do a full run of all your tests.
Running tests on a continuous integration (CI) server is essential, especially when you have lots of integration tests.
TDD is just a methodology for writing new code or modifying old code. Your entire test base should be run every time a modification is made to any code file (new feature or refactoring). That's how you ensure no regression has taken place. We're talking about automated testing here (unit tests, system tests, acceptance tests, sometimes performance tests as well).
Continuous integration (CI) will help you achieve that: a CI server (Jenkins, Hudson, TeamCity, CruiseControl...) will have all your tests and run them automatically when you commit a change to source control. It can also calculate test coverage and indicate where your code is insufficiently tested (note that if you do proper TDD, your test coverage should be close to 100%).

Xcode 4 - 'Include Unit Tests'

I just upgraded to Xcode 4 and I was wondering whether I need to 'include unit tests' when setting up an application. Also, what does that mean exactly?
You do not need to include unit tests.
What does "unit testing" mean? (from the unit-testing FAQ)
Unit testing is a method by which individual units of source code are tested to determine if they are fit for use. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual function or procedure. Unit tests are created by programmers or occasionally by white box testers.
Ideally, each test case is independent from the others: substitutes like method stubs, mock objects, fakes and test harnesses can be used to assist testing a module in isolation. Unit tests are typically written and run by software developers to ensure that code meets its design and behaves as intended. (Wikipedia)
Unit testing is closely related to Test Driven Development.
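To make that concrete, here is a small sketch of a unit test that isolates one class behind a hand-written stub. The class and interface names are made up, and although the example uses JUnit, the same structure applies to the unit test targets Xcode generates:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class PriceCalculatorTest {

        // The dependency the unit normally talks to.
        interface RateService {
            double rateFor(String currency);
        }

        // The unit under test.
        static class PriceCalculator {
            private final RateService rates;

            PriceCalculator(RateService rates) {
                this.rates = rates;
            }

            double convert(double amountUsd, String currency) {
                return amountUsd * rates.rateFor(currency);
            }
        }

        @Test
        public void convertsUsingTheProvidedRate() {
            // Stub: replaces the real rate service so the calculator is tested in isolation.
            RateService stub = currency -> 0.5;
            PriceCalculator calculator = new PriceCalculator(stub);
            assertEquals(5.0, calculator.convert(10.0, "EUR"), 0.0001);
        }
    }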
@ToddH points out:
It's easier to include [unit tests] when you set up the project. If you do it later, there are quite a few steps involved in doing it correctly: http://twobitlabs.com/2011/06/...
Thanks for the protip, Todd!
