I am using the SpotBugs plugin within the Eclipse IDE. I can run SpotBugs over a whole project, which gives me the impression that the tool needs to build the project to produce its analysis report.
But the documentation says that it's a static analysis tool.
So I am curious: if it requires building the project, can we still call it a static analysis tool?
And if it doesn't require building the project, can we run SpotBugs on single .java files?
Static analysis means that the tool analyzes your project files "at rest", as opposed to monitoring a running application. https://en.wikipedia.org/wiki/Static_program_analysis
Analyzing bytecode has both strengths and weaknesses compared to analyzing source code. It's faster and better suited to deep analysis of program flow, but it won't pick up mistakes that get compiled away, such as unnecessary imports and inconsistent-but-legal whitespace.
You can't properly run it on a single file, even if you compiled that file, because some detectors take multiple files into consideration, e.g. detecting when you pass null to a method whose parameters are annotated as non-null, or when you've defined a method as public and then never called it from outside the class.
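To make that concrete, here is a hypothetical two-class sketch (the class and method names are mine, not from the question): analyzing Caller.class alone reveals nothing, because the problem only surfaces when the analyzer also sees Service's contract.

```java
// Hypothetical example: a cross-file detector needs both compiled classes.
class Service {
    // Contract (in real code, e.g. an @NonNull annotation on the parameter):
    // s must never be null.
    static int length(String s) {
        return s.length();
    }
}

public class Caller {
    public static void main(String[] args) {
        // Looking at Caller.class in isolation, this call appears fine; only
        // by also reading Service's contract can a tool flag the null argument.
        String arg = null;
        try {
            Service.length(arg);
        } catch (NullPointerException e) {
            System.out.println("contract violation surfaced only at runtime");
        }
    }
}
```

Without cross-file analysis, the mistake is only discovered when the NullPointerException is thrown at runtime.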
Yes, since SpotBugs analyzes bytecode (.class files), you must first build the project (at least the part you want to analyze).
After that, you can analyze just a single file, for example in IntelliJ IDEA (still the FindBugs plugin, but SpotBugs can do everything FindBugs could; same code base).
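The same build-then-analyze order applies outside the IDE; for example, with the spotbugs-maven-plugin (a sketch; version omitted) you would run mvn compile spotbugs:check:

```xml
<!-- Sketch: SpotBugs via Maven still analyzes the compiled classes, so the
     compile phase must run first (e.g. mvn compile spotbugs:check). -->
<plugin>
  <groupId>com.github.spotbugs</groupId>
  <artifactId>spotbugs-maven-plugin</artifactId>
</plugin>
```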
Is there a way to have the spring-configuration-metadata.json file generated in the project's resources/META-INF (as opposed to target\classes\META-INF\spring-configuration-metadata.json) so that it can be pushed to version control on change?
Using spring-boot-configuration-processor dependency with maven
Quickly glancing through the source code of the relevant part of the configuration processor implementation, it looks like the output location is hard-coded.
Since the spring-boot-configuration-processor works during Maven's "compile" phase, you can probably move the generated file by using other Maven plugins (like the antrun plugin, resource filtering, and so on). That should be a direct answer to your question.
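For example, a maven-antrun-plugin execution along these lines (a sketch, not tested; paths assume the default annotation-processor output location) could copy the generated file back into the source tree:

```xml
<!-- Sketch: after compilation, copy the generated metadata into src/main/resources -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <phase>process-classes</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <target>
          <copy file="${project.build.outputDirectory}/META-INF/spring-configuration-metadata.json"
                todir="${project.basedir}/src/main/resources/META-INF"/>
        </target>
      </configuration>
    </execution>
  </executions>
</plugin>
```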
However, to be honest, I don't think you should store this file in a version control system, for two main reasons:
This file is not source code in the sense that you or your co-workers should edit it manually.
If someone on your team does refactoring in the IDE, it may accidentally change stuff in the file, so it will be hard to keep it in sync. The current implementation makes sure that the file is regenerated during the compilation process, so that can't happen. The compilation-time overhead is negligible.
So, bottom line: I believe it should be kept in the target folder.
I'm trying to write a plugin for IntelliJ in Clojure. To that end I want to implement some extension points with Clojure's :gen-class functionality. I've added the gradle-clojure plugin and placed some Clojure code in src/main/clojure. But when I build the project it says
> Task :compileClojure SKIPPED
Why is that?
Also, on a related note: If I add the expression (throw (Exception. "abort")) to the Clojure code on the top level, I can crash the build. This doesn't make sense to me. Why would Clojure code get executed during the build?
In Clojure, pre-compilation is not required. The source code can be compiled when it is first loaded at runtime, as long as the source is bundled in the .jar file.
For gradle-clojure specifically, the default build task runs checkClojure, which calls Clojure's load function on each source directory, loading all the namespaces. When a namespace is loaded, its top-level expressions are executed in order; normally those are only def or defn forms, which just define global vars. This is done to ensure there are no compiler errors before bundling into the .jar, and it's also why your top-level (throw ...) crashes the build.
The gradle-clojure compileClojure task only ahead-of-time compiles the namespaces configured via aotNamespaces, or all of them if you use aotAll(); with no AOT namespaces configured, the task has nothing to do and is reported as SKIPPED. It works by calling Clojure's compile on each configured namespace. See the gradle-clojure documentation for more info.
For more detail about Clojure compilation, see this documentation.
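As a sketch (based on the aotNamespaces / aotAll() options mentioned above; check the gradle-clojure documentation for the exact DSL of your version), enabling AOT in build.gradle looks roughly like:

```groovy
// Sketch: opt in to AOT compilation so compileClojure is no longer SKIPPED
clojure {
  builds {
    main {
      aotAll()  // compile every namespace, or instead:
      // aotNamespaces.add('my.plugin.core')  // just the :gen-class namespaces
    }
  }
}
```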
I am working on migrating a multi-module Java project to Maven. Most of the modules are migrated by now.
I am also aware that my project has a lot of unnecessary jars included, and I want to clean them up.
I know Maven has the plugin goal mvn dependency:analyze, which works very well:
dependency:analyze analyzes the dependencies of this project and determines which are: used and declared; used and undeclared; unused and declared. based on static code analysis.
Now my question is: how can I remove the reported "unused and declared" dependencies for cleanup purposes? It could be that some of those jars are used at runtime, so my code would compile perfectly fine after removing them but blow up at runtime.
An example: my code compiles against the open-source library antisamy.jar, which requires batik.jar at runtime. Yet mvn dependency:analyze reports batik.jar as removable.
Is my understanding correct, or do I need expert input here?
Your understanding seems to be correct.
But I'm not sure why you'd think that there is a tool that could cover all the bases.
Yes, if you use stuff via reflection, no tool can reliably detect that you depend on this class or that one.
For example, consider this snippet:
String myClassName = "com." + "example." + "SomeClass"; // assembled at runtime
Class<?> someClass = Class.forName(myClassName); // fails only at runtime if the class is missing
I don't think you can build a tool that crawls through the code and extracts all such references.
I'd use a try-and-fail approach instead, which would consist of:
- remove all dependencies that dependency:analyze says are superfluous
- whenever you find one that was actually used, just add it back
This could work well because I expect the number of dependencies actually used via reflection to be extremely small.
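Once you've confirmed a dependency really is runtime-only (like the batik example), you can stop dependency:analyze from flagging it again; a sketch using the plugin's ignoredUnusedDeclaredDependencies parameter (the artifact pattern here is illustrative):

```xml
<!-- Sketch: suppress the false positive for a known runtime-only dependency -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <configuration>
    <ignoredUnusedDeclaredDependencies>
      <ignoredUnusedDeclaredDependency>org.apache.xmlgraphics:batik-*</ignoredUnusedDeclaredDependency>
    </ignoredUnusedDeclaredDependencies>
  </configuration>
</plugin>
```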
The code base I am working with has a lot of generated code. In addition, there are some deprecated files that I want to exclude from SonarQube analysis. I've read the documentation and looked at some answers on here about that, but it does not help in my case.
I have a multi-module maven project. So I have multiple projects in my workspace that are all part of a large application. Say I want to exclude this file:
/home/username/workspace/com.mst.rtra.importing.message/bin/com/mst/rtra/importing/message/idl/parse/idlparser.java
I don't really know how to write this in the exclusion settings on SonarQube because of how long the file path is. Also, what if I want to exclude another file, but from a different module, say:
/home/username/workspace/com.mst.rtra.interpreter.create/
I am confused about how I should write this in the exclusions box in the project settings. Should I write the absolute file path, given the multi-module nature of this project? Or is there some other convention?
In addition, if I want to exclude generated files from analysis, I would need to put file:/generated-sources/, as I saw in another answer. However, after analysis, I can still view the results for those files when I open the project in the SonarQube dashboard.
We use Ant rather than Maven, and an older version of the Sonar Ant task at that. But what works for us is setting a sonar.exclusions property in our build.xml, which accepts wildcards for file names. For example:
<property name="sonar.exclusions" value="**/com/ex/wsdl/asvc/*.java,**/com/ex/wsdl/bsvc/*.java"/>
That skips analyzing all the code generated from a WSDL file for two services. You ought to be able to do something similar for Maven.
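In Maven, the equivalent is to set the same sonar.exclusions property, for example in the pom (the patterns below are illustrative):

```xml
<!-- sonar.exclusions takes comma-separated wildcard patterns relative to each module -->
<properties>
  <sonar.exclusions>**/generated-sources/**/*.java,**/com/ex/wsdl/asvc/*.java</sonar.exclusions>
</properties>
```

Because the patterns are relative wildcards, the same property works across modules of a multi-module build, so absolute paths are not needed.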
I need to write a Sonar plugin to keep track of the library classes that are used the most in a project.
So far I read the Coding a Plugin guide but I am a little bit confused. Does Sonar provide any facility to perform analysis (Something like parsing of Java code, creation of Abstract Syntax Trees, ...) or should I look for an external tool that does it and use Sonar only as a reporting tool?
Sonar provides a framework for publishing your own code-analysis results into Sonar so that they are all in a single place. Although it does some analysis of its own, it mostly relies on other static code analysis tools and integrates them into the lifecycle; e.g., test coverage can be provided by Cobertura or Clover.
It sounds to me, though, like you just want a measure of afferent couplings, which can be configured for a single library. I'm not sure how you would manage it for cross-library dependencies, as most of the plugins work by instrumenting the code at compile time, which would not be possible for classes already in a jar.
If you just want to generate an AST then you should check out this question.
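If you do end up handling the parsing yourself, the counting side is straightforward. Here is a rough stdlib-only sketch (class and method names are mine, not any Sonar API) that tallies import statements as a crude proxy for which library classes are used most:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: count import statements across Java source files as a rough
// stand-in for "most used library classes". Real analysis would resolve
// fully-qualified and wildcard usages too.
public class ImportCounter {
    static final Pattern IMPORT = Pattern.compile("^import\\s+([\\w.]+);", Pattern.MULTILINE);

    public static Map<String, Integer> count(List<String> sources) {
        Map<String, Integer> counts = new HashMap<>();
        for (String src : sources) {
            Matcher m = IMPORT.matcher(src);
            while (m.find()) {
                counts.merge(m.group(1), 1, Integer::sum); // tally each imported class
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> sources = List.of(
            "import java.util.List;\nclass A {}",
            "import java.util.List;\nimport java.io.File;\nclass B {}");
        System.out.println(count(sources).get("java.util.List")); // prints 2
    }
}
```

This deliberately ignores static and wildcard imports; for anything serious you'd want a real AST, as in the linked question.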