Using Sonar in a pretty big team - SonarQube

We have about 20 people on our team, and for now we are using Sonar to analyse new code before submitting it to the mainline. So each developer uses their own Sonar instance installed on their machine.
What I'm trying to do is create a single Sonar instance that every developer will be able to use. The only concerns I have are:
One developer launches an analysis on one revision of a file, and right after that a second developer launches an analysis on another revision of the same file (in the worst case we could have a bunch of such files). The first developer won't be able to see his violations, and won't be able to see the code he wrote at all. Is there some mechanism to overcome this?
What happens if two developers analyse the same project at the same time? AFAIK, Sonar won't allow them to do so. Any workaround for this?
Of course, we could somehow create a project on the Sonar side for each team member, but this has its drawbacks: for example, issues marked as false positives in one project won't appear as such in another project, and so on.
Any ideas on these issues?

What you probably want to set up is:
a central Sonar instance that analyses the code base on a regular basis (for instance every day) based on the code located in the repository. This instance should be the reference and the project manager(s) will use it to monitor the project.
ask the developers to run local analyses before committing their code:
either using Sonar Eclipse if you're coding in Java, C++ or Python. Everything is perfectly described in the documentation, more precisely in the "Checking code prior to commit" section,
or using the Issues Report plugin if your language is not yet supported in Sonar Eclipse.
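For the local analysis itself, the command is typically just a normal analysis run in preview/dry-run mode with the HTML issues report enabled, so nothing is pushed to the central instance. A rough sketch (the server URL is a placeholder, and the exact property names depend on your Sonar version; older versions use sonar.dryRun=true instead of sonar.analysis.mode=preview):

sonar-runner -Dsonar.host.url=http://sonar.mycompany.example \
             -Dsonar.analysis.mode=preview \
             -Dsonar.issuesReport.html.enable=true

Run this way, each developer gets a local report of the new issues in his working copy without interfering with anyone else's analysis or with the reference project on the central server.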

SonarQube Quality Profiles are not being used during the sonarqube scan

We have Sonar tasks installed and enabled for our build definition. What we are seeing is that the quality profiles stop being used for one build run, and are automatically used again for the next run. We consistently see the same behaviour on alternate build runs.
[Image: activity log showing the quality profiles being stopped and started]
What you're seeing is the result of a bad configuration somewhere. You indicate in your comments that, along with the toggling of profiles, you also see large swings in issue counts, as "most of the file types get excluded from analysis resulting in very few issues reported".
So let's break this down:
The profile events you're seeing simply record/reflect the changes in profile use from one analysis to another. If I have a project with Java and JavaScript, the first analysis will use the default profiles for both languages. Then, let's say I use the deprecated property sonar.language to restrict analysis to just Java files and analyze again. Since JavaScript is no longer found in my project, the default JavaScript profile will not be used, and a profile event will be recorded.
That seems to be what you're seeing in your activity log.
So now to the detective work: why is this happening? First, the behaviour swings back and forth. That indicates the configuration is not set at the project level (in SonarQube itself), but rather comes from properties that are only sometimes passed during analysis, or some other analysis-side circumstance. There are a few possible causes, which you'll need to investigate independently:
sonar.language - if this deprecated property is used during analysis, it will limit the by-default multi-language analysis to a single language. It could be set in your properties files or passed on the analysis command line as -Dsonar.language=foo
exclusions - exclusions are difficult to set properly from the analysis side, but this can happen
improper/incomplete checkout - is it possible that only part of your project is checked out?
In investigating this, you should be aware that analysis-side properties can be set at two levels: for the individual project/analysis, or in the global scanner configuration.
I'm guessing that your CI system has multiple slaves and languages are dropped - or not - from your project depending on which slave the job lands on that night.
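To make the toggling concrete, compare two hypothetical scanner invocations (the project key and values are made up):

sonar-scanner -Dsonar.projectKey=my:project
sonar-scanner -Dsonar.projectKey=my:project -Dsonar.language=java -Dsonar.exclusions=**/*.js

If only some of your build agents or job configurations add the extra properties of the second form, the set of languages detected in the project - and therefore the profiles in use - will flip between runs, exactly as your activity log shows.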
I was experiencing a similar problem and I believe I have tracked down the root of the issue for us. After capturing the source code and properties used during the build, I was unable to find any discrepancies in the Sonar properties or the source code collected, as suggested in the previous answer.
The solution we were trying to analyze with SonarQube has a few C# projects that are part of an external core solution, which is shared between several components. Compiling any one of these other components requires that core be compiled first. I believe the issue is that, when compiling my component to be analyzed, MSBuild would sometimes rebuild some of the projects included in core. Depending on whether or not they were rebuilt, the number of projects and files (and hence issues) would swing wildly in one direction or the other.
By forcing MSBuild to clean my component's solution before compiling,
MsBuild.exe MySolution.sln /t:Clean,Build
I ensure that a consistent set of projects is built and analyzed by SonarQube. I am 30 builds in with this new approach and there is no more flip-flopping between using and not using a quality profile.
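For reference, if your compilation is wrapped with the SonarQube Scanner for MSBuild (an assumption on my part; adjust to however your Sonar build tasks invoke MSBuild), the clean simply slots in between the begin and end steps. The project key, name and version below are placeholders:

MSBuild.SonarQube.Runner.exe begin /k:"my-project-key" /n:"MyProject" /v:"1.0"
MsBuild.exe MySolution.sln /t:Clean,Build
MSBuild.SonarQube.Runner.exe end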

Is interaction between a widget plugin and SonarQube server to do analysis possible?

Specifically, I want to run some analysis on the issues generated and the rules violated. I want to design the system so that a few click actions in the Sonar web interface initiate the analysis in the back end. The reason for this is that these analyses are only run in certain cases, and I don't want the instance hosting my plugin to carry additional load during every run.
Also, if possible, could you point me in the right direction?
I couldn't find anything here: http://docs.sonarqube.org/display/DEV/Web+API
Please note that I have only started researching this very recently and am also new to SonarQube.
The SonarQube server is for performing analysis only. What I was looking for was for the server to give the user access to control the analysis. But using independent plugins to do static analysis could work.
I solved this issue by splitting the plugin into
A plugin for doing inline analysis work (if a need for doing our own static analysis arises)
A WebApp to classify issues, listing them by projects, etc.
This approach seemed more attractive when, after discussion, we decided that everything we wanted our widget to do, our WebApp itself could do. From SonarQube version 6.2 the Ruby APIs are going to be deprecated, so moving to a REST-based approach gives a more enduring solution.
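For the WebApp side, the web service we rely on is api/issues/search, documented under the Web API link above. A rough sketch of the kind of call involved (host, project key and token are placeholders for your own setup):

curl -u YOUR_TOKEN: "https://sonar.example.com/api/issues/search?componentKeys=my:project&resolved=false&ps=500"

The JSON response contains the issues plus paging information, so the WebApp can list and classify them by project, rule or severity without adding any load to the analyses themselves.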

Publish a specific revision using CruiseControl.Net

I am setting up a CruiseControl.Net server. So far, it only builds a project (a .Net website), and I kind of know how to set up unit testing, code coverage, etc. in the future.
What I will need to have soon is this:
The developers commit changes to SVN continually, thus CCNet builds often.
CCNet will publish the latest version to the development server, as soon as a commit is validated (with unit tests etc).
The project manager validates a specific version, in order to publish it to the pre-production server, and create a SVN tag from this revision.
The last point is where my problem lies: how exactly can I set things up so the project manager can, for instance, browse to the CCNet web dashboard, select a specific previous build, and say "this is the build I want to publish"?
I believe that my thinking is flawed somewhere, but I can't put my finger on it. Maybe CCNet is not the right place to do these manipulations?
In my mind, I can create an SVN tag using CCNet and mostly work from the trunk, but maybe I can't? Maybe it's the other way around, and I should add a CCNet project every time a tag is created in SVN?
The final goal is that I want to automate the publication process: zip creation (for archiving), web.config modification (using NAnt for instance), and website publication (using FTP).
In all these steps, I want to keep manual intervention to a minimum. If I can avoid adding a new project to CCNet every time a tag or branch is created in SVN, that would be awesome.
Thanks for your help, and sorry if it's not very easy to read, but it's not very clear in my head either...
Since you can create any task, you should be able to achieve the goal, though unfortunately not out-of-the-box.
Since you use SVN, everything actually depends on the revision. I think I'd create a separate project for your third scenario and add a parameter through which the PM would provide the revision number. Then, based on that, I'd tag the sources etc. in my own task.
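The tagging task itself can boil down to an svn copy pinned to the requested revision. A sketch, assuming a hypothetical repository URL and that the PM-supplied parameter ends up in a %Revision% variable (or however you choose to pass it into the task):

svn copy -r %Revision% https://svn.example.com/repo/trunk ^
    https://svn.example.com/repo/tags/release-r%Revision% ^
    -m "Tagging revision %Revision% for pre-production"

The same parametrized project can then zip the output, tweak web.config and publish via FTP for that exact tag, so "publish this build" stays a single forced build on the dashboard.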
Regarding the other points, I think it is similar. Recently, for web projects, we started using MSDeploy, and in each stage's build the MSDeploy package is created. Then there is a separate build called Deploy which, when forced, lets us select which package we want to deploy using MSDeploy.
Having several environments, however, started to feel a little like overkill to manage with CCNet, and I'll be looking into kwakee at some point.

Automated Software Versioning integrated with Issue Control System

I decided to use the following pattern after reading about semantic versioning at http://semver.org/. However, I have some unsolved issues in my mind in terms of automating and integrating SDLC tools.
Version Pattern:
major.minor.revision.build
Such that:
Major: major changes; should be incremented manually.
Minor: minor changes; should be incremented automatically whenever a new feature, or an enhancement to an existing feature, is resolved in the issue tracking system.
Revision: changes not affecting the minor version; should be incremented automatically whenever a bug is resolved in the issue tracking system.
Assume that developers never commit source unless an issue has been resolved in the issue tracking system, and that the issue tracking system is JIRA in this configuration. This means that there are bugs, improvements, and new features as issue types by default, apart from tasks.
Furthermore, I am adding a continuous integration tool to this configuration, and assume that it is Bamboo (by the way, I have never used Bamboo before; I used Hudson), and I am using the Eclipse IDE with the Mylyn plugin, and the project is a Maven (web) project.
Now, I want to clarify what I want to do by illustrating the following scenario. An analyst (A) opens an issue (I), which is a new feature, related to a Maven project (P). As a developer (D), I receive an email about the issue, and I open the task via the Mylyn interface in Eclipse. I understand and develop the new feature related to issue (I). Consider that I am a Test-Driven-Development-oriented developer, so I write the unit, DBUnit, and user-acceptance tests (for example using Selenium) accordingly. Finally, I commit the changes to source control. I think the rest should be cycled automatically, but I don't know how I can achieve this. The auto-cycled part is the following:
The source control system should have a post-commit hook script that triggers the continuous integration tool to build the project (P). While building, in the proper phase, the test code should be run and the reports generated. The user-acceptance tests should be performed on a dedicated server (for example JBoss or Tomcat). The order of this acceptance testing should be: bring the server up, run the UA tests, generate the UA test reports, and bring the server down. If all these steps have been successfully completed, the versioning should be performed. In the versioning part, a Maven plugin, or whatever else, should take the number of issues resolved from the issue tracking system and increment the related version fragments (minor and revision), and finally append the build number. The fragments of the version may be saved in the manifest file in order to show them in the user interface. Last but not least, the CI tool should deploy it to the test environment. That's the whole auto-cycled process I want.
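For the first step, I imagine the post-commit hook can be something as simple as the sketch below; the CI server URL, job name and token are made up, and Bamboo and Hudson each expose their own variant of such a remote-trigger endpoint:

#!/bin/sh
# Subversion post-commit hook: arguments are <repository-path> <revision>.
REPOS="$1"
REV="$2"   # available if your CI endpoint accepts a revision parameter
# Ask the CI server to start a build of project (P).
curl -s "http://ci.example.com/job/project-p/build?token=BUILD_TOKEN" > /dev/null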
Should the deployment of the artifact to the production environment be done automatically or manually?
Let's start with the side question: on automatic deployment to production, this requires the sign-off of "the business", whoever that is. How good do your tests need to be to automatically push to production? Are they good enough that you trust things to just go live? What's your downtime? Is that acceptable? If your tests miss something, can you roll back? Are you monitoring production so you know if you've introduced problems? Generally, the answers to enough of these questions are negative enough that you can't auto-deploy there as the result of a build/autotest event.
As for the tracking, you'll need a few things. You'll need all your assumptions to be true (which I doubt they are, but if you get there that's awesome). You'll also need a build number that can be incremented after build time based on test results. You'll need source changes to be annotated with bug ids. You'll need the build system to parse the source changes and make associations with issues. You'll need an API into the build system so you can get the count of issues associated with the build. Finally you'll need your own bit of scripting to do the query and update the build number accordingly.
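The "query the issue tracker" piece is the most tractable part. With JIRA, for instance, a small script on the CI side could count the resolved issues per type along these lines (host, credentials, project key and JQL are purely illustrative):

# Count resolved bugs for this release; the "total" field of the JSON response
# is what would drive the revision component of the version number.
curl -s -u ci-user:secret \
  "https://jira.example.com/rest/api/2/search?maxResults=0&jql=project%3DMYPROJ%20AND%20issuetype%3DBug%20AND%20status%3DResolved"

A similar query for new features and improvements would drive the minor component, and your script would then rewrite the version in the manifest before packaging.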
That's totally doable, but is it really worth having? What's the value you attach to the numbering scheme?

What does CruiseControl (or any other CI tool) give more than well-written (n)Ant?

We have a large collection of nAnt scripts that build our various products. They almost all have the following structure:
Erase old working copy.
Check out complete fresh copy from version control.
Increment build number in appropriate file (custom nAnt task).
Run static analysis (StyleCop, Perl scripts)
Build solution using Visual Studio - ends up with MSI output.
Run unit tests (nUnit, JSUnit)
Run static analysis (FxCop)
Zip up deliverables (MSI, readme, etc) into well-named package.
Put this zip package onto a server share.
Email results to team.
From our research, it seems that CruiseControl(.NET?)/Hudson/BuildBot would only add the trigger that causes the build (which at the moment is double-clicking the nAnt script over Remote Desktop) and a status dashboard.
Are we missing anything else significant?
The question is subjective, and thus so is my answer.
In the projects I've automated before, CruiseControl was used essentially for that one purpose: so we didn't have to remote into the build machine and trigger builds. The CI part is that CruiseControl will monitor the repository for you, triggering builds at the intervals you define.
It also gave us the dashboard from which we could trigger releases, or go back to examine the logs and artefacts from past builds.
For us that was enough benefit to implement CruiseControl. Perhaps it doesn't "seem" like much until you've finished it and a month later realized you haven't had to touch your build system because it's off silently and thanklessly doing its thing for you.
A Continuous Integration server such as Hudson would do 1, 2, 3, 9 and 10 for you so that you don't have to implement them yourself. If you've already got it working that's maybe not a huge improvement for your current project but it makes things simpler for subsequent projects. It would also, as you mention, take care of when to trigger the build.
Hudson will also chart various trends over time, such as test coverage, build time, static analysis results. You can also have more sophisticated notifications than just e-mail if you choose.
The most important thing it gives you is visual feedback (the bigger the screen, the better). When you have one machine dedicated to displaying build results, visible to all team members, it works like a catalyst: people see that something is wrong and fix it.
If you have something like that standing in a place where your boss can see it and ask you "Hey Wilkinson, why is this screen red?", won't you fix your build faster?
They all look much the same; you can pick whichever you think fits your needs, just have one set up and running.
