Is there a way to get a measure of the number of tests in a SonarQube JavaScript project?
We currently only have coverage metrics, and SonarQube even identifies the test files as 'Unit test', but I can't find a measure for test count anywhere.
In contrast, on my Java project I do have a test count measure.
Coverage metrics are provided by SonarJS, but test count is not. To get it, you need to use the Generic Test Coverage plugin and provide test execution results in XML format.
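For reference, a generic test execution report looks roughly like this (the file path and test names are illustrative, and the exact schema depends on your SonarQube/plugin version):

<testExecutions version="1">
  <file path="test/sample.test.js">
    <testCase name="adds two numbers" duration="5"/>
    <testCase name="handles empty input" duration="12"/>
  </file>
</testExecutions>

The report is then passed to the scanner via a property such as sonar.testExecutionReportPaths (the property name differs between the standalone plugin and newer SonarQube versions).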
In the build process of my application I use SonarQube to show some statistics regarding code quality. In particular we use it to show the code coverage of some tests that are executed nightly.
To calculate and retrieve code coverage data I use the JaCoCo Maven plugin and agent this way:
mvn org.jacoco:jacoco-maven-plugin:0.7.8:dump \
    sonar:sonar \
    -Djacoco.address=TEST_SERVER \
    -Djacoco.destFile=/proj/coverage-reports/jacoco-it.exec \
    -Dsonar.projectKey=TEST \
    -Dsonar.projectName=TEST \
    -Dsonar.branch=TEST \
    -Dsonar.jacoco.itReportPath=/proj/coverage-reports/jacoco-it.exec
The code coverage calculated in this way is correct (as it reflects the expected coverage of some tests which are executed nightly).
But there are cases where I cannot execute the jacoco:dump goal to retrieve the coverage statistics. In those cases, running sonar:sonar resets the existing coverage statistics (calculated by previous executions of the jacoco:dump goal) to 0, because SonarQube assumes that a statistic that is not sent does not exist.
What I would like is that when I do not dump and recalculate the coverage with JaCoCo, the coverage measures in SonarQube are not lost but keep the last calculated values.
Is there any way to instruct the SonarQube server, or maybe the Maven Sonar plugin, to preserve past code coverage statistics?
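One way to sketch the idea, assuming the previous jacoco-it.exec file is kept between builds: run the dump goal only when it is actually possible, and always point sonar:sonar at the existing exec file, so that the last collected coverage is re-sent instead of nothing (the reachability check is a placeholder for your own condition):

EXEC_FILE=/proj/coverage-reports/jacoco-it.exec

# run the dump only when the test server is reachable (placeholder check)
if can_reach_test_server; then
  mvn org.jacoco:jacoco-maven-plugin:0.7.8:dump \
      -Djacoco.address=TEST_SERVER \
      -Djacoco.destFile=$EXEC_FILE
fi

# always analyse with the most recent exec file available
mvn sonar:sonar \
    -Dsonar.projectKey=TEST \
    -Dsonar.projectName=TEST \
    -Dsonar.branch=TEST \
    -Dsonar.jacoco.itReportPath=$EXEC_FILE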
Kindly suggest: is automating test data generation in JMeter good for performance testing or not?
If I have to generate test data in large numbers, will it impact the performance test negatively?
Example: if I have to generate usernames and email IDs in large numbers and fetch them in the script using functions and random variables, will JMeter spend more time on the fetching, and will this affect the response time results?
Could anyone kindly suggest the pros and cons of automating test data generation in performance testing?
As long as the generated data fits into the Java heap, it is fine to generate test data on the fly.
There should be no impact on response times, as the duration of pre/post processors and timers is not counted (unless you have a Transaction Controller in Generate Parent Sample mode).
Make sure you follow the recommendations from the 9 Easy Solutions for a JMeter Load Test "Out of Memory" Failure guide to get the most out of your JMeter instance(s).
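For instance, usernames and unique email addresses can be generated on the fly with JMeter's built-in functions, directly in the sampler's parameter fields (the variable name and domain here are illustrative):

username: ${__RandomString(10,abcdefghijklmnopqrstuvwxyz,randUser)}
email:    ${__RandomString(10,abcdefghijklmnopqrstuvwxyz,)}@example.com
id:       ${__UUID()}

If generation ever does become a bottleneck, the usual alternative is to pre-generate the data into a CSV file and read it with a CSV Data Set Config.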
I want to run a test every 3 days and compare some of the results with the previous test run. What is the best way to achieve this? I have considered writing the results to files and reading the values for comparison in the next run, but I am having difficulty generating unique file names automatically and having the test recognise which file to use in the next run.
If you are using Jenkins to run your test periodically, you can use the Performance Plugin for JMeter to compare the results of every run.
For more details: http://www.testautomationguru.com/jmeter-continuous-performance-testing-part2/
You can also use Grafana to compare the results.
For more details: http://www.testautomationguru.com/jmeter-real-time-results-influxdb-grafana/
BlazeMeter Sense is another option; it requires a plugin to upload the results to BlazeMeter Sense.
I'm not sure how to capture the result data related to unit tests each time a unit test is run. I use Bamboo as a continuous integration server. It works great: it basically builds your project every time you submit code and sends you an email if the build fails / you screwed up somewhere. I would like Bamboo to start running full unit tests as well as builds, and I would also like to start gathering data about those tests.
My question is: I know that in a lot of different programs you can track the number of lines of code changed and the total lines of code in the entire program. I also know that unit testing gives you data such as the number of passes/failures. What I would like to do is automatically gather this data, among other data such as defect density.
My C++ project uses googletest to produce XML results in the JUnit format for Jenkins. This is working well for pass/fail results and test durations.
Some of my tests measure code performance and assert that this exceeds some threshold. I would like to extend this to charting the performance data over successive builds. I use the googletest RecordProperty method to log additional information in the XML:
<testcase name="MyTest" status="run" time="3.964" classname="MyTestSuite" PerformanceData="131" />
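For context, an attribute like PerformanceData above is produced by a RecordProperty call inside the test body; a minimal sketch (the workload helper and threshold are illustrative):

#include <gtest/gtest.h>

// hypothetical stand-in for the timed workload measurement
static int RunTimedWorkload() { return 131; }

TEST(MyTestSuite, MyTest) {
  const int perf = RunTimedWorkload();
  RecordProperty("PerformanceData", perf);  // shows up as an XML attribute
  EXPECT_GE(perf, 100);                     // the performance threshold assertion
}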
How can I configure Jenkins or one of its plugins to chart PerformanceData (or an equivalent record) across successive builds?
You could try the Plot Plugin to plot the performance numbers.
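The Plot Plugin reads its data points from files in the workspace, so one possible approach (a sketch, assuming the JUnit XML lands in test-results.xml) is a post-build shell step that extracts the attribute into a properties file in the YVALUE=<number> format the plugin expects:

# pull the PerformanceData attribute out of the JUnit XML for the Plot Plugin
grep -o 'PerformanceData="[0-9]*"' test-results.xml \
  | head -1 \
  | sed 's/PerformanceData="\([0-9]*\)"/YVALUE=\1/' > perf.properties

The plot configuration then points at perf.properties and charts YVALUE across successive builds.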