SonarQube analysis: org.joda.convert ERROR

Why do I get this error:
Class not found: org.joda.convert.ToString
None of my code uses this class (Maven-based Sonar analysis), and it doesn't seem to affect the analysis. However, I get worried whenever there are "[ERROR]" logs in the output. My exact command is:
mvn org.codehaus.mojo:sonar-maven-plugin:2.6:sonar
I switched to this command because it was suggested in another, related answer...

This error message is logged by the analyzer whenever it tries to complete a symbol during semantic analysis and cannot find a .class file.
This will happen whether your classes use the missing class directly or transitively (via a dependency, or a dependency of a dependency, etc.). Arguably this is not an error per se in all cases, but it is important information for users to have, because missing classes can lead to incomplete results (some issues might not be raised because symbols won't be resolved).
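If you want the analyzer to be able to resolve those symbols (and the log line to go away), the usual fix is to make the missing classes available on the project's classpath. A minimal sketch for this particular message, assuming the standard org.joda:joda-convert artifact (the version below is illustrative; pick one compatible with your joda-time dependency):

<dependency>
  <!-- Only needed so the analyzer can resolve org.joda.convert.* symbols -->
  <groupId>org.joda</groupId>
  <artifactId>joda-convert</artifactId>
  <version>1.9.2</version>
  <!-- 'provided' keeps it off the runtime classpath -->
  <scope>provided</scope>
</dependency>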

Related

Exception: java.lang.ClassCastException: com.blazemeter.jmeter.threads.DynamicThread cannot be cast to org.apache.jmeter.samplers.SampleResult

I am currently using BlazeMeter to run load (performance) tests for a Java application, but I am getting this error. When I run the test locally on my machine, the error does not occur.
Under 'scenario definition' my test starts off with a standard JMeter thread group. I did a bit of searching and realised that com.blazemeter.jmeter.threads.DynamicThread comes from the CustomThreadGroups plugin, so I have also uploaded the corresponding plugin jar, jmeter-plugins-casutg-2.9.jar.
Screenshot of scenario definition
Screenshot of error
Would just like to check if there is some other jar file that I need to upload to solve this error, or if there is another method to solve this issue? Thank you.
Normally you should raise this kind of question with BlazeMeter Support, as they have a better understanding of their own infrastructure.
With regards to the question itself, most probably you're suffering from a form of jar hell:
BlazeMeter uses Taurus under the hood to kick off JMeter tests, and Taurus automatically downloads JMeter Plugins, so the plugin versions you uploaded may clash with the ones already present on their end.
You need to remove one of the joda-time libraries: you cannot tell for sure which one will be loaded onto the classpath, and if their APIs are inconsistent you can get unpredictable errors.
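One way to confirm such a clash (a hypothetical diagnostic, not something provided by BlazeMeter or Taurus) is to print which jar a suspect class is actually loaded from, for example in a small standalone check:

public class WhichJar {
    public static void main(String[] args) throws ClassNotFoundException {
        // The class name is just an example; use any class you suspect is duplicated
        Class<?> clazz = Class.forName("org.joda.time.DateTime");
        java.security.CodeSource source = clazz.getProtectionDomain().getCodeSource();
        // CodeSource is null for classes loaded by the bootstrap class loader
        System.out.println(clazz.getName() + " loaded from: "
                + (source == null ? "bootstrap class loader" : source.getLocation()));
    }
}

If a local run and a BlazeMeter run report different jars, you have found the clash.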

Why would I suddenly get 'KerberosName$NoMatchingRule: No rules applied to user@REALM' errors?

We've been using Kerberos auth with several (older) Cloudera instances without a problem but are now getting 'KerberosName$NoMatchingRule: No rules applied to user@REALM' errors. We've been modifying code to add functionality, but AFAIK nobody has touched either the authentication code or the cluster configuration.
(I can't rule it out - and clearly SOMETHING has changed.)
I've set up a simple unit test and verified this behavior. At the command line I can execute 'kinit -kt user.keytab user' and get the corresponding Kerberos tickets. That verifies the correct configuration and keytab file.
However my standalone app fails with the error mentioned.
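For reference, a typical keytab login in a standalone Hadoop/Cloudera client looks something like the sketch below; the principal and keytab path are placeholders mirroring the kinit command above, not the actual application code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosLoginCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Must match the cluster's configured authentication mechanism
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        // Placeholder principal and keytab, mirroring 'kinit -kt user.keytab user'
        UserGroupInformation.loginUserFromKeytab("user@REALM", "user.keytab");
        System.out.println("Logged in as: " + UserGroupInformation.getLoginUser());
    }
}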
UPDATE
As I edit this I've been running the test in the debugger so I can track down exactly where it is failing, and it seems to succeed when run in the debugger!!! Obviously there's something different in the environments, not some weird heisenbug that is only triggered when nobody is looking.
I'll update this if I find the cause. Does anyone else have any ideas?
auth_to_local has to have at least one rule.
Make sure you have the "DEFAULT" rule at the very end of auth_to_local.
If none of the earlier rules match, at least the DEFAULT rule will kick in.
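On a Cloudera/Hadoop cluster the mapping lives in hadoop.security.auth_to_local (core-site.xml, or the corresponding safety valve in Cloudera Manager). A minimal sketch, with EXAMPLE.COM standing in for your realm:

<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[1:$1@$0](.*@EXAMPLE\.COM)s/@.*//
    DEFAULT
  </value>
</property>

The RULE line strips the realm from matching principals; the trailing DEFAULT catches default-realm principals that no earlier rule matched, so "No rules applied" no longer occurs for them.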

SonarQube Generic Execution Report is ignored

The whole morning I have been trying to set up e2e test reporting via SonarQube's Generic Execution, using the Generic Test Data -> Generic Execution feature.
I created a custom XML report and reference it in the scan properties like this:
sonar.testExecutionReportPaths=**/e2e-report.xml
So far, SonarQube seems to completely ignore this property, and I see no attempt to parse the file in the logs. Has anyone made this work?
These are the links from SonarSource about the Generic Execution feature:
https://docs.sonarqube.org/display/SONAR/Generic+Test+Data
https://github.com/SonarSource/sonarqube/blob/master/sonar-scanner-engine/src/main/java/org/sonar/scanner/genericcoverage/GenericTestExecutionSensor.java
This is a SonarQube 6.2+ feature. Make sure to use an appropriate SonarQube version.
In addition, sonar.testExecutionReportPaths does not accept wildcard patterns (like *).
Please provide relative or absolute paths, comma-separated.
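For example, if the report is generated into a reports/ folder, the property would look like this in sonar-project.properties (the path is illustrative):

# Explicit relative path instead of the **/ wildcard, which is not expanded
sonar.testExecutionReportPaths=reports/e2e-report.xml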
See also:
The official documentation of the Generic Test Data feature
The source code, that looks up the generic execution files
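For reference, the minimal structure the scanner expects in that file looks like this (file path, test name and duration are placeholders; see the documentation above for the optional skipped/failure/error elements):

<testExecutions version="1">
  <file path="src/e2e/login.spec.js">
    <testCase name="user can log in" duration="250"/>
  </file>
</testExecutions>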

Sonar False-Positive Feature

I am using the false-positive feature in my project on a SonarQube server, and I have marked some violation instances (let's say 50 instances) as false positives.
Now I create a new project in SonarQube with the same code base and analyse it. As the code base is the same for both projects, those 50 violation instances will obviously occur here as well, even though I already marked them as false positives in my previous project.
I don't want to spend time marking these instances as false positives again, so I want to ask: is there any way to mark these 50 violation instances as false positives by referring to my first project, without doing it manually?
Can I make a template/profile-type feature to copy false-positive markings from one project and apply them to another project with the same code base, so that I can save time?
Please let me know if anyone knows a way to do this.
Any response will be appreciated.
Thanks in advance!
It is not currently possible to achieve what you want, unless you write a small Java program that uses the Sonar Web Service Java client to do the job.
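A rough sketch of that idea, using the current SonarQube web API over plain HTTP rather than the legacy Java client (the server URL, project keys, token handling and the issue-matching step are assumptions; the endpoints and parameters are the ones recent SonarQube versions expose):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class CopyFalsePositives {
    public static void main(String[] args) throws Exception {
        String server = "https://sonar.example.com";                    // assumed server URL
        String auth = "Basic " + Base64.getEncoder()
                .encodeToString((System.getenv("SONAR_TOKEN") + ":").getBytes());
        HttpClient client = HttpClient.newHttpClient();

        // 1. List the issues already resolved as false positive in the first project
        HttpRequest search = HttpRequest.newBuilder(URI.create(server
                + "/api/issues/search?componentKeys=first-project-key"
                + "&resolutions=FALSE-POSITIVE&ps=500"))
                .header("Authorization", auth)
                .build();
        String json = client.send(search, HttpResponse.BodyHandlers.ofString()).body();
        System.out.println(json);
        // ...match them (same rule, file and line) against the issues of the new
        // project via another api/issues/search call, which is the tedious part...

        // 2. Mark each matching issue of the new project as a false positive
        String targetIssueKey = "TARGET-ISSUE-KEY";                     // placeholder
        HttpRequest mark = HttpRequest.newBuilder(URI.create(server + "/api/issues/do_transition"))
                .header("Authorization", auth)
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "issue=" + targetIssueKey + "&transition=falsepositive"))
                .build();
        client.send(mark, HttpResponse.BodyHandlers.ofString());
    }
}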
The only trick I found was to add a // NOSONAR comment on the line containing the false positive.
This way, the information is shared among branches.
But, as NOSONAR masks any Sonar issue on that line, you may miss other issues besides the one you intended to mask.
Example:
var myVar; // NOSONAR

OSGi - Candidate permutation failed due to a conflict between imports

I am in a situation where my Felix OSGi container will not start properly after deploying Groovy via:
obr:deploy "Groovy Scripting Languge"#1.7.3
I managed to deploy it and got Groovy running, until I restarted my OSGi container... then most of the bundles would not start. FWIW, I am pretty sure Groovy is not the cause, even though there's a typo in its bundle name. :-)
After some troubleshooting and turning on Felix's wire logging (thank god!), I noticed this (the other failed bundles show a similar cause):
2011-04-03 16:26:43,108 DEBUG [FelixStartLevel] felix.wire - Candidate permutation failed due to a conflict between imports; will try another if possible. (org.apache.felix.framework.resolver.ResolveException: Unable to resolve module org.apache.felix.http.bundle [36.0] because it is exposed to package 'org.osgi.framework' from org.apache.felix.framework [0] and com.springsource.org.aspectj.tools [47.0] via two dependency chains.
Chain 1:
org.apache.felix.http.bundle [36.0]
import: (&(package=org.osgi.framework)(version>=1.3.0))
|
export: package=org.osgi.framework
org.apache.felix.framework [0]
Chain 2:
org.apache.felix.http.bundle [36.0]
import: (&(package=org.osgi.service.log)(version>=1.3.0))
|
export: package=org.osgi.service.log; uses:=org.osgi.framework
osgi.cmpn [15.0]
import: (&(package=org.osgi.framework)(version>=1.5.0)(!(version>=2.0.0)))
|
export: package=org.osgi.framework
com.springsource.org.aspectj.tools [47.0])
Seems like both o.a.felix.framework and c.s.o.aspectj.tools are exporting o.osgi.framework.
I am able to get things running again by removing bundle id 47 (c.s.o.aspectj.tools), but have yet to check whether there are other implications. It feels wrong, because I removed c.s.o.aspectj.tools even though the OBR repository indicated it as required (or optional) for Groovy. In fact, it was installed via the obr:deploy command for Groovy.
It feels like c.s.o.aspectj.tools should not be exporting o.osgi.framework, but that's just a guess, as I do not use the AspectJ tools.
Question: What is the proper way to resolve such issues without resorting to educated guesses?
You are right, c.s.o.aspectj.tools should not export org.osgi.framework, but apparently it does. To be precise, I checked version 1.6.8, and that has the following export statement:
org.osgi.framework;version="1.6.8.RELEASE"
Furthermore, it does not import org.osgi.framework. This is plain wrong, and I'd say it's worth a bug report with the Spring team; if you export a package, you should usually also import it, and I can't think of a valid reason to tag org.osgi.framework with a different version than it actually has.
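For comparison, a correctly packaged library bundle would only import that package, with a version range (the range below is illustrative), and would have no Export-Package entry for org.osgi.framework at all:

Import-Package: org.osgi.framework;version="[1.3,2.0)"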
How can you get around this for now? The problem with the two resolution chains can be resolved by wiring both http and cmpn to the same framework package; perhaps even a simple osgi:refresh in the shell could help you, since the declared version (1.6.8.RELEASE) is within the import ranges of both http and cmpn, by accident.
If you don't really need the aspectj stuff, I would leave it out.
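In shell terms that could be as simple as the following (the command name assumes the Karaf-style shell that obr:deploy above comes from, and 36 is the bundle id of the http bundle from the wire log):

osgi:refresh 36

which forces the framework to re-resolve that bundle's wiring once the conflicting export is gone.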
