Sonar False-Positive Feature - sonarqube

I am using the false-positive feature in my project deployed on the Sonar server, and I have marked some violation instances (let's say 50 instances) as false positives.
Now I create a new project in Sonar with the same code base and deploy it on Sonar. As the code base is the same for both of my projects, those 50 violation instances will obviously occur here as well, the same ones I marked as false positives in my previous project.
I don't want to spend time marking these instances as false positives again, so I want to ask: is there any way to mark these 50 violation instances as false positives by referring to my first project, without doing it manually?
Can I make a template/profile type feature to copy the false-positive marks from one project and apply them to another project with the same code base, so that I can save time?
Please let me know if anyone knows a way to do this.
Thanks in advance!

It is not currently possible to achieve what you want, unless you write a small Java program that uses the Sonar Web Service Java client to do the job.
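For reference, here is a minimal sketch of such a program, assuming a SonarQube version that exposes the issues web API (api/issues/search and api/issues/do_transition). The server URL, token, project keys and the matching heuristic (same rule + file path + line) are placeholders, parameter names can differ between versions, and paging beyond the first 500 issues is not handled:

// Hypothetical sketch: copy false-positive markings from a source project to a
// target project with the same code base, via the SonarQube web API.
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;
import java.util.HashSet;
import java.util.Set;

public class CopyFalsePositives {
    static final String SONAR = "https://sonar.example.com";            // placeholder server URL
    static final String AUTH = "Basic " + Base64.getEncoder()
            .encodeToString("myAuthToken:".getBytes());                  // token as user name, empty password
    static final HttpClient HTTP = HttpClient.newHttpClient();
    static final ObjectMapper JSON = new ObjectMapper();

    public static void main(String[] args) throws Exception {
        // 1. Collect the issues already resolved as false positive in the source project.
        Set<String> falsePositives = new HashSet<>();
        for (JsonNode issue : search("componentKeys=projectA&resolutions=FALSE-POSITIVE&ps=500")) {
            falsePositives.add(signature(issue));
        }
        // 2. Walk the open issues of the target project and flag the ones that match.
        for (JsonNode issue : search("componentKeys=projectB&statuses=OPEN&ps=500")) {
            if (falsePositives.contains(signature(issue))) {
                post("/api/issues/do_transition",
                     "issue=" + issue.get("key").asText() + "&transition=falsepositive");
            }
        }
    }

    // Identify an issue by rule + file path (without the project key prefix) + line.
    static String signature(JsonNode issue) {
        String file = issue.get("component").asText().replaceFirst("^[^:]+:", "");
        String line = issue.has("line") ? issue.get("line").asText() : "-";
        return issue.get("rule").asText() + "|" + file + "|" + line;
    }

    static JsonNode search(String query) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(SONAR + "/api/issues/search?" + query))
                .header("Authorization", AUTH).GET().build();
        return JSON.readTree(HTTP.send(req, HttpResponse.BodyHandlers.ofString()).body()).get("issues");
    }

    static void post(String path, String form) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(SONAR + path))
                .header("Authorization", AUTH)
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form)).build();
        HTTP.send(req, HttpResponse.BodyHandlers.discarding());
    }
}

Note that the rule + file + line heuristic will miss issues whose line numbers shifted between analyses.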

The only trick I found was to add a // NOSONAR comment on the line containing the false positive.
This way, the information is shared among branches.
But, since NOSONAR masks any Sonar issue on that line, you may miss a different issue than the one you intended to mask.
Example:
var myVar; // NOSONAR

Why is Jenkins.get().getRootUrl() not available when generating DSL?

I'm debugging a problem with atlassian-bitbucket-server-integration-plugin. The behavior occurs when generating a multi-branch pipeline job, which requires a Bitbucket webhook. The plugin works fine when creating the pipeline job from the Jenkins UI. However, when using DSL to create an equivalent job, the plugin errors out attempting to create the webhook.
I've tracked this down to a line in RetryingWebhookHandler:
String jenkinsUrl = jenkinsProvider.get().getRootUrl();
if (isBlank(jenkinsUrl)) {
    throw new IllegalArgumentException("Invalid Jenkins base url. Actual - " + jenkinsUrl);
}
The jenkinsUrl is used as the target for the webhook. When the pipeline job is created from the UI, the jenkinsUrl is set as expected. When the pipeline job is created by my DSL in a freeform job, the jenkinsUrl is always null. As a result, the webhook can't be created and the job fails.
I've tried various alternative ways to get the Jenkins root URL, such as static references like Jenkins.get().getRootUrl() and JenkinsLocationConfiguration.get().getUrl(). However, all values come up empty. It seems like the Jenkins context is not available at this point.
I'd like to submit a PR to fix this behavior in the plugin, but I can't come up with anything workable. I am looking for suggestions about the root cause and potential workarounds. For instance:
Is there something specific about the way my freeform job is executed that could cause this?
Is there anything specific to the way jobs are generated from DSL that could cause this?
Is there another mechanism I should be looking at to get the root URL from configuration, which might work better?
Is it possible that this behavior points to a misconfiguration in my Jenkins instance?
If needed, I can share the DSL I'm using to generate the job, but I don't think it's relevant. By commenting out the webhook code that fails, I've confirmed that the DSL generates a job with the correct config.xml underneath. So, the only problem is how to get the right configuration to the plugin so it can set up the webhook.
It turns out that this behavior was caused by a partial misconfiguration of Jenkins.
While debugging problems with broken build links in Bitbucket (pointing me at unconfigured-jenkins-location instead of the real Jenkins URL), I discovered a yellow warning message on the front page of Jenkins which I had missed before, telling me that the root server URL was not set:
Jenkins root URL is empty but is required for the proper operation of many Jenkins features like email notifications, PR status update, and environment variables such as BUILD_URL.
Please provide an accurate value in Jenkins configuration.
This error message had a link to Manage Jenkins > Configure System > Jenkins Location. The correct Jenkins URL actually was set there (I had already double-checked this), but the system admin email address in the same section was not set. When I added a valid email address, the yellow warning went away.
This change fixed both the broken build URL in Bitbucket and the problems with my DSL. So, even though it doesn't make much sense, it seems the missing system admin email address was the root cause of this behavior.
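For anyone hitting the same symptom, both settings live on Jenkins' JenkinsLocationConfiguration object, so they can also be set programmatically, for example from an init.groovy.d script. A minimal sketch with placeholder URL and e-mail address (the snippet uses the Java API and is valid Groovy as written):

import jenkins.model.JenkinsLocationConfiguration;

// Set the root URL and the system admin e-mail address that the
// "Jenkins Location" section of Manage Jenkins > Configure System stores.
JenkinsLocationConfiguration location = JenkinsLocationConfiguration.get();
location.setUrl("https://jenkins.example.com/");   // placeholder root URL
location.setAdminAddress("admin@example.com");     // placeholder admin address
location.save();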

SonarQube Generic Execution Report is ignored

I have spent the whole morning trying to set up e2e test reporting via SonarQube's Generic Execution, using the Generic Test Data -> Generic Execution feature.
I created a custom XML report that gets added to the scan properties like this:
sonar.testExecutionReportPaths=**/e2e-report.xml
So far, SonarQube seems to completely ignore this property, and I see no attempt to parse the file in the logs. Has anyone made it work?
These are Sonar's links about the Generic Execution feature:
https://docs.sonarqube.org/display/SONAR/Generic+Test+Data
https://github.com/SonarSource/sonarqube/blob/master/sonar-scanner-engine/src/main/java/org/sonar/scanner/genericcoverage/GenericTestExecutionSensor.java
This is a SonarQube 6.2+ feature. Make sure to use an appropriate SonarQube version.
In addition, sonar.testExecutionReportPaths does not allow matchers (like *).
Please provide relative or absolute paths, comma separated.
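For illustration, a working setup might look like this (the report path and file names are made up; durations are in milliseconds, and the path attribute must point at a test file the scanner indexes):

sonar.testExecutionReportPaths=reports/e2e-report.xml

<testExecutions version="1">
  <file path="e2e/login.spec.js">
    <testCase name="logs the user in" duration="2500"/>
    <testCase name="rejects a wrong password" duration="1800"/>
  </file>
</testExecutions>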
See also:
The official documentation of the Generic Test Data feature
The source code, that looks up the generic execution files

Sonar: Execution time history of single test

TXTFIT = test execution time for individual test
Hello,
I'm using Sonar to analyze my Maven Java Project. I'm testing with JUnit and generating reports on the test execution time with the Maven Surefire plugin.
In my Sonar I can see the test execution time and drill down to see how long each individual test took. In the time machine I can only compare the overall test execution time between two releases.
What I want is to see how the TXTFIT changed since the last version.
For example:
In version 1.0 of my software, htmlParserTest() takes 1 sec to complete. In version 1.1 I add a whole bunch of tests (so the overall execution time is going to be way longer), but htmlParserTest() also suddenly takes 2 secs. I want to be notified: "Hey mate, htmlParserTest() takes twice as long as it used to. You should take a look at it."
What I'm currently struggling to find out:
How exactly does the TXTFIT get from the Surefire XML report into Sonar?
I'm currently looking at AbstractSurefireParser.java,
but I'm not sure if that's actually the default Surefire plugin.
I was looking at 5-year-old stuff. I'm currently checking out this. I still have no idea where Sonar is getting the TXTFIT from, or how and where it connects them to the source files.
Can I find the TXTFIT in the Sonar DB?
I'm looking at the local DB of my test Sonar with DbVisualizer and I don't really know where to look. The SNAPSHOT_DATA table doesn't seem to be readable by humans.
Are the TXTFIT even saved in the DB?
Depending on this, I either have to write a sensor that actually saves them or a widget that simply shows them on the dashboard.
Any help is very much appreciated!
The web service api/tests/*, introduced in version 5.2, allows you to get this information. Example: http://nemo.sonarqube.org/api/tests/list?testFileUuid=8e3347d5-8b17-45ac-a6b0-45df2b54cd3c
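A minimal sketch of pulling this data, assuming a reachable server and a token with browse permission (the server URL, token and test file UUID are placeholders; the response lists each test case of the file with its duration, which is the TXTFIT to track between versions):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class ListTestDurations {
    public static void main(String[] args) throws Exception {
        String server = "https://sonar.example.com";                       // placeholder server
        String auth = "Basic " + Base64.getEncoder()
                .encodeToString("myAuthToken:".getBytes());                // token as user name
        String fileUuid = "8e3347d5-8b17-45ac-a6b0-45df2b54cd3c";          // test file UUID, as in the example above

        HttpRequest request = HttpRequest.newBuilder(
                URI.create(server + "/api/tests/list?testFileUuid=" + fileUuid))
                .header("Authorization", auth)
                .GET().build();

        // Print the raw JSON; storing the output per release and diffing it
        // gives the per-test execution time history the question asks for.
        String body = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body();
        System.out.println(body);
    }
}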

How to enqueue more than one build of the same configuration?

We are using two TeamCity servers (one for builds and one for GUI tests). The GUI tests are triggered by an HTTP GET in the last step of the build (as in http://confluence.jetbrains.com/display/TCD65/Accessing+Server+by+HTTP).
The problem is that only one build of the same configuration can be in the queue at the same time. Is there a way to enable multiple starts of the same configuration? Can I use some workaround, like sending a dummy id?
At the bottom of the section "Triggering a Custom Build" here: http://confluence.jetbrains.com/display/TCD65/Accessing+Server+by+HTTP, you can find information about passing custom parameters to the build.
Just define some unused configuration parameter, like "BuildId", and pass, for example, the current date (a GUID will work as well) to it:
...&buildId=12/12/23 12:12:12
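As a sketch, the trigger could look like this in Java (the server URL, build configuration id, credentials and the BuildId parameter name are placeholders; the add2Queue call and the name/value pair follow the "Triggering a Custom Build" section linked above):

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.UUID;

public class TriggerGuiTests {
    public static void main(String[] args) throws Exception {
        String dummyId = UUID.randomUUID().toString();      // unique value for every trigger
        String auth = "Basic " + Base64.getEncoder()
                .encodeToString("user:password".getBytes()); // placeholder credentials for httpAuth

        String url = "http://teamcity.example.com/httpAuth/action.html"
                + "?add2Queue=GuiTests"                       // placeholder build configuration id
                + "&name=" + URLEncoder.encode("BuildId", StandardCharsets.UTF_8)
                + "&value=" + URLEncoder.encode(dummyId, StandardCharsets.UTF_8);

        // Each request carries a distinct BuildId, so every call enqueues its own build.
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", auth).GET().build();
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.discarding());
    }
}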

Is there a way to suppress SQL03006 error in VS2010 database project?

First of all, I know that the error I am getting can be resolved by creating a reference project (of type Database Server) and then referencing it in my Database project...
However, I find this to be overkill, especially for small teams where there is no specific role separation between developers and DB admins... But let's leave this discussion for another time... The same goes for DACs... I can't use a DAC because of the limited set of supported objects...
Question
Now, the question is: can I (and how) disable the SQL03006 error when building my Database project? In my case this error is generated because I am creating some users whose logins are "unresolved"... I think this should be possible, I hope, since I "know" that the logins will exist on the server before I deploy the script... I also don't want to maintain a database server project just to keep the references resolved (I have nothing besides logins at the server level)...
Workaround
Using pre/post-deployment scripts, it is trivial to get the script working...
Workaround Issue
You have to comment out the user scripts (which use the login references) for the workaround...
As soon as you do that, the .sqlpermissions files bomb out, saying there are no referenced users... And then you have to comment the permissions out and put them in post-deployment scripts...
The main disadvantage of this workaround is that you cannot leverage schema compare to its fullest extent (you have to tell it to ignore users/logins/permissions).
So again, all I want is
1. to maintain only the DB project (no references to DB Server projects)
2. to disable/suppress the SQL03006 error
3. to be able to use schema compare in my DB project
Am I asking for the impossible? :)
Cheers
P.S.
If someone is aware of better VS2010 database project templates/tools (for SQL Server 2008 R2), please do share...
There are two workarounds:
1. Turn off any schema checking (Tools > Options > Database Tools > Schema Compare > SQL Server 200x, then the Object Type tab) for anything user or security related. This is a permanent fix.
2. Go through the schema comparison and mark anything user or security related as Skip, and then generate your SQL compare script. This is a per-comparison fix.
It should be obvious, but if you already have scripts in your project that reference logins or roles, delete them and they won't get created.