Manual Measures / Metrics in SonarQube

I'm working on a project with SonarQube and its analysis. Can anyone tell me whether I can add my own metric for SonarQube to use when analysing my code? For example (I know this one already exists as "Comments Balance"):
Tharmar's Metric 00 = "Commented Lines of Code" / "Lines of Code"
- OR -
Tharmar's Metric 01 = count of the word "Tharmar" in the "Lines of Code"
I tried to find something useful in the documentation, but the only thing I found was about Manual Measures. I was able to create a new column within the analysis (verified with the CSV plugin). Understandably, it contains no data.
Is there a way to tell SonarQube how to find that data, or how to calculate that metric from the given data?
Thanks for any help :)

Manual measures won't answer your need: they are only used to push "simple" data at project level.
What you need to do is to write your own SonarQube plugin to compute your own metrics. Here's the material that will be useful to you:
Coding a Plugin documentation
Sample Plugin that you can use to bootstrap your plugin. Most notably, check the Sensor and Decorator classes.
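To make that concrete, here is a minimal, hypothetical sketch of what Tharmar's Metric 00 could look like against the batch extension API from that documentation. All class and key names are invented, and the details vary between SonarQube versions:

```java
import java.util.Arrays;
import java.util.List;

import org.sonar.api.batch.Decorator;
import org.sonar.api.batch.DecoratorContext;
import org.sonar.api.batch.DependsUpon;
import org.sonar.api.measures.CoreMetrics;
import org.sonar.api.measures.Measure;
import org.sonar.api.measures.Metric;
import org.sonar.api.measures.Metrics;
import org.sonar.api.resources.Project;
import org.sonar.api.resources.Resource;

// Declares the new metric so SonarQube knows about it.
public class TharmarMetrics implements Metrics {
  public static final Metric THARMAR_METRIC_00 = new Metric.Builder(
      "tharmar_metric_00", "Tharmar's Metric 00", Metric.ValueType.PERCENT)
      .setDescription("Commented lines of code / lines of code")
      .setDomain(CoreMetrics.DOMAIN_DOCUMENTATION)
      .create();

  public List<Metric> getMetrics() {
    return Arrays.asList(THARMAR_METRIC_00);
  }
}

// Computes the metric from measures that other sensors have already saved.
class TharmarDecorator implements Decorator {

  // Make sure the base measures exist before this decorator runs.
  @DependsUpon
  public List<Metric> dependsUpon() {
    return Arrays.asList(CoreMetrics.COMMENT_LINES, CoreMetrics.NCLOC);
  }

  public boolean shouldExecuteOnProject(Project project) {
    return true;
  }

  public void decorate(Resource resource, DecoratorContext context) {
    Measure comments = context.getMeasure(CoreMetrics.COMMENT_LINES);
    Measure ncloc = context.getMeasure(CoreMetrics.NCLOC);
    if (comments != null && ncloc != null
        && ncloc.getValue() != null && ncloc.getValue() > 0) {
      context.saveMeasure(TharmarMetrics.THARMAR_METRIC_00,
          100.0 * comments.getValue() / ncloc.getValue());
    }
  }
}
```

Both classes would then be listed in your plugin descriptor's getExtensions() so that SonarQube picks them up.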

Related

Documentation on FluxAggregator or OCR

Is there any documentation on FluxAggregator or OCR? I thought the first one was properly documented previously. I would like to implement an Oracle as a POC for integration.
I was pointed to the FluxMonitor; would I use one to query n APIs and send the aggregate to a FluxAggregator contract address?
And then have another process, e.g. a requester, query the last round's value via the aggregator's function?
Would I use the latest version, v0.6?
FluxAggregator was the aggregation contract used prior to OffChainReporting (OCR). This is why the last version that ships FluxAggregator is v0.6.
For documentation on OCR, please see
https://docs.chain.link/docs/off-chain-reporting/
and
https://docs.chain.link/docs/jobs/types/offchain-reporting/
for more information.

Google Cloud Natural Language API - adding your own classifier

I have been searching for how to create a new entity in the Google Natural Language API and found nothing. Can anybody explain how to create a new classifier such that if I pass a sentence containing, say, 'python', it is detected as a programming language? Currently the API classifies 'python' as 'other'.
I have also looked into the Cloud AutoML API for my solution and tried to create and train a model, but it was only able to do sentiment analysis, not entity detection. It was giving me a score rather than telling me that Java is a programming language.
Thanks in advance. Your help will be appreciated.
AutoML content classification classifies your data into the labels specified in the training set. It does not do entity detection. But it seems like what you need is closer to content classification than entity detection. My understanding from the description you provided is that you have content (maybe words, phrases, or short sentences) and you want to classify it into some labels (e.g. programmingLanguage). If you put together a good training set, the AutoML model should be able to do this.
The number it provides in eval is not sentiment, it's the probability of the predicted label. As you can see in the eval page you posted, it's telling you that java is a programmingLanguage with a probability of 1 (so it's very certain about it).
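For reference, here is a sketch of querying a trained AutoML Natural Language classification model from Java, assuming the v1 client library; the project ID and model ID are placeholders:

```java
import com.google.cloud.automl.v1.AnnotationPayload;
import com.google.cloud.automl.v1.ExamplePayload;
import com.google.cloud.automl.v1.ModelName;
import com.google.cloud.automl.v1.PredictRequest;
import com.google.cloud.automl.v1.PredictResponse;
import com.google.cloud.automl.v1.PredictionServiceClient;
import com.google.cloud.automl.v1.TextSnippet;

public class ClassifyText {
  public static void main(String[] args) throws Exception {
    String projectId = "my-project";  // placeholder
    String modelId = "TCN1234567890"; // placeholder model id

    try (PredictionServiceClient client = PredictionServiceClient.create()) {
      ModelName name = ModelName.of(projectId, "us-central1", modelId);

      // Wrap the text to classify in the payload structure the API expects.
      TextSnippet snippet = TextSnippet.newBuilder()
          .setContent("I wrote this service in Java")
          .setMimeType("text/plain")
          .build();
      ExamplePayload payload =
          ExamplePayload.newBuilder().setTextSnippet(snippet).build();
      PredictRequest request = PredictRequest.newBuilder()
          .setName(name.toString())
          .setPayload(payload)
          .build();

      PredictResponse response = client.predict(request);
      for (AnnotationPayload annotation : response.getPayloadList()) {
        // displayName is the label; score is the probability of that label.
        System.out.printf("%s: %.3f%n",
            annotation.getDisplayName(),
            annotation.getClassification().getScore());
      }
    }
  }
}
```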

AChartEngine - time chart or line chart?

I would like to visualize a series of measurements with AChartEngine. For this I have double values for the measured results and, currently, a string with the time. At the moment I use a line chart of the results plotted against an index number. I would now like to replace that number with the time. Unfortunately, I do not know how, and I can find no suitable examples.
Edit
Okay, I have found a good example and it works. But how can I make it flexible like the normal labels?
There are plenty of examples of using the AChartEngine APIs in the official demo application. See these instructions in order to figure out how to download the demo source code.
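In the meantime, here is a minimal sketch of a time chart (the series name and sample values are invented); the key point is that a TimeSeries accepts Date objects for X, so the axis shows times instead of plain index numbers:

```java
import java.util.Date;

import org.achartengine.ChartFactory;
import org.achartengine.GraphicalView;
import org.achartengine.model.TimeSeries;
import org.achartengine.model.XYMultipleSeriesDataset;
import org.achartengine.renderer.XYMultipleSeriesRenderer;
import org.achartengine.renderer.XYSeriesRenderer;

import android.app.Activity;
import android.os.Bundle;

public class MeasurementChartActivity extends Activity {
  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    // A TimeSeries takes Date objects on the X axis instead of plain indexes.
    TimeSeries series = new TimeSeries("Measurements");
    series.add(new Date(System.currentTimeMillis() - 60000), 1.5);
    series.add(new Date(System.currentTimeMillis() - 30000), 2.3);
    series.add(new Date(), 1.9);

    XYMultipleSeriesDataset dataset = new XYMultipleSeriesDataset();
    dataset.addSeries(series);

    XYMultipleSeriesRenderer renderer = new XYMultipleSeriesRenderer();
    renderer.addSeriesRenderer(new XYSeriesRenderer());

    // The last argument is the date format used for the X axis labels.
    GraphicalView chart =
        ChartFactory.getTimeChartView(this, dataset, renderer, "HH:mm:ss");
    setContentView(chart);
  }
}
```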

Sonar - LOC & cyclomatic complexity

How does Sonar calculate software metrics, particularly LOC and cyclomatic complexity? Does it use any particular tools? If yes, please also give their names.
For each supported language, a "squid" plugin is used to parse the source code and determine some base metrics such as LOC and complexity. How the complexity is calculated varies based on the plugin.
For example, here's the source code files for the JavaScript plugin: https://github.com/SonarCommunity/sonar-javascript/tree/master/javascript-squid/src/main/java/org/sonar/javascript/metrics
In this case, the complexity is calculated in the plugin itself using a very simple formula.
And here is the same set of classes for the C# support: https://github.com/SonarCommunity/sonar-dotnet/tree/master/sonar/csharp/sonar-csharp-squid/csharp-squid/src/main/java/com/sonar/csharp/squid/metric
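To give a rough idea of what such a "very simple formula" amounts to (this is an illustrative sketch, not the actual plugin code): cyclomatic complexity is essentially 1 plus the number of branch points found while walking the parsed code.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ComplexitySketch {
  // Tokens that open an extra execution path; the exact set
  // differs from one language plugin to another.
  private static final Set<String> BRANCH_TOKENS = new HashSet<>(Arrays.asList(
      "if", "for", "while", "case", "catch", "&&", "||", "?"));

  // Complexity starts at 1 (one linear path through the code)
  // and grows by one per branch point.
  public static int complexity(List<String> tokens) {
    int complexity = 1;
    for (String token : tokens) {
      if (BRANCH_TOKENS.contains(token)) {
        complexity++;
      }
    }
    return complexity;
  }

  public static void main(String[] args) {
    List<String> method = Arrays.asList(
        "if", "(", "a", "&&", "b", ")", "{", "doWork", "(", ")", ";", "}");
    System.out.println(complexity(method)); // 3: base path + "if" + "&&"
  }
}
```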
The creation of metrics, though, can be done by any plugin, so you could write your own plugin if you wanted to supplement the data, or display the data in a different way.
Also take a look at the answer to this question (about creating a new plugin) by Fabrice, one of the .Net plugin maintainers: SonarQube - help in creating a new language plugin
You can browse http://docs.codehaus.org/display/SONAR/Metric+definitions for more details.

How do I see/debug the way Solr finds its results?

Let's say I search for "ABLS" and Solr returns a result that does not make any sense to me.
How can I debug why Solr picked this record to be returned?
debugQuery=true would help you get the detailed score calculation and the explanation for each score.
An overview of the scoring is available at link
For a detailed explanation of the debug information you can refer to Link
You could add debugQuery=true&indent=true to the URL and examine the results. You could also use the analysis tool in Solr: go to the admin page and click Analysis. You would need to read the wiki to understand either of these in more depth.
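If you query from Java, the same debug output can be requested through SolrJ. A sketch, assuming a local Solr with a placeholder core named "mycore":

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class DebugQueryExample {
  public static void main(String[] args) throws Exception {
    // "mycore" is a placeholder core name.
    HttpSolrClient solr =
        new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();

    SolrQuery query = new SolrQuery("ABLS");
    query.set("debugQuery", "true"); // ask Solr to explain parsing and scoring

    QueryResponse response = solr.query(query);
    // The per-document score explanations live under the "explain" debug key.
    System.out.println(response.getDebugMap().get("explain"));

    solr.close();
  }
}
```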
debugQuery will give you insight into why your scoring looks like it does (and how relevant every field is).
Then take the results that you do not understand and play with them in Solr's analysis tool.
You should find it under:
/admin/analysis.jsp?highlight=on
Alternatively, turn on highlighting over your results to see what is actually matching in them.
Solr queries are full of short parameters that are hard to read and modify, especially when there are many of them.
Afterwards it is even harder to debug and understand why one document is more or less relevant than another. The debug explain output is usually a tree too big to fit on one page.
I found this Google Chrome extension useful for viewing Solr's query explain and debug output in a clear manner.
For those who still use the very old Solr 3.x: "debugQuery=true" will not output the debug information; you should specify "debugQuery=on" instead.
There are two ways of doing that. The first is at the query level, which means adding debugQuery=on to your query. That will include a few things:
parsed query
debug timing information
detailed scoring information, which helps you analyse why a given document is given a score.
In addition to that, you can use the [explain] transformer and add it to your fl parameter. For example ...&fl=*,[explain], which will result in your documents having the scoring information as another field.
The scoring information can be quite extensive and will include calculations done by the similarity algorithm. If you would like to learn more about the similarities and the scoring algorithm in Solr, have a look at this talk by me and my colleague Radu from Sematext at the Activate conference: https://www.youtube.com/watch?v=kKocQdYGVJM
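For completeness, the [explain] transformer route via SolrJ, again a sketch with a placeholder core name and assuming the schema has an "id" field:

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrDocument;

public class ExplainTransformerExample {
  public static void main(String[] args) throws Exception {
    HttpSolrClient solr =
        new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();

    SolrQuery query = new SolrQuery("ABLS");
    // Request all stored fields plus the per-document explanation pseudo-field.
    query.setFields("*", "[explain]");

    for (SolrDocument doc : solr.query(query).getResults()) {
      System.out.println(doc.getFieldValue("id"));
      System.out.println(doc.getFieldValue("[explain]"));
    }

    solr.close();
  }
}
```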
