Let quality gate violation fail incremental analysis [closed] - sonarqube

We're trying to set up a pull-request build pipeline that is triggered from Bitbucket, reports failure when SonarQube's code analysis finds a quality gate violation, and ultimately rejects the PR.
As far as I have read, the Build Breaker plugin, which would enable such a thing, is no longer supported in the most recent versions of SonarQube, at least not in incremental/preview mode, since those modes now work without a database.
What are my alternatives for creating such functionality? Sticking with 5.0?
Also, I noticed that for quite some time SonarQube has been able to produce text/HTML reports for CI analysis - do these include quality gate violations as well, or only the individual inspection results? Should one retrieve the former via the API then? I suspect that would require a full analysis, since it requires the results to be saved to the database, right?

There are two Bitbucket-related plugins to analyze pull requests. One for On Demand/Cloud and one for Server. Each will add comments to your pull request, and the On Demand version will approve a PR with no new issues.
Regarding your second question, the Issue Reports you're referring to contain only issues. In fact, it's not possible to calculate overall Quality Gate compliance from a preview/incremental analysis, since such analyses look only at issues, and Quality Gates can contain conditions on tests, duplications, etc.
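To address the API part of the question: after a full (non-preview) analysis, later SonarQube versions (5.3 and up) expose the quality gate result through the api/qualitygates/project_status web service, which a build step can poll and fail on. Below is a minimal sketch of that idea only; the server URL, project key, and token environment variable are assumptions, and the string-based JSON check is deliberately crude:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    // Hypothetical post-analysis gate check. Assumes SonarQube 5.3+ with the
    // api/qualitygates/project_status web service; server URL, project key and
    // token below are placeholders.
    public class QualityGateCheck {
        public static void main(String[] args) throws Exception {
            String server = "https://sonar.example.com";        // assumed server URL
            String projectKey = "com.example:my-project";       // assumed project key
            String token = System.getenv("SONAR_TOKEN");        // assumed auth token

            URL url = new URL(server + "/api/qualitygates/project_status?projectKey=" + projectKey);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            String basic = Base64.getEncoder()
                    .encodeToString((token + ":").getBytes(StandardCharsets.UTF_8));
            conn.setRequestProperty("Authorization", "Basic " + basic);

            StringBuilder body = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
            }

            // Crude check; a real implementation would parse the JSON properly.
            if (body.indexOf("\"status\":\"ERROR\"") >= 0) {
                System.err.println("Quality gate failed: " + body);
                System.exit(1);  // non-zero exit fails the CI step
            }
            System.out.println("Quality gate passed.");
        }
    }

Run from the Bitbucket-triggered build right after the analysis finishes; the non-zero exit code is what lets the pipeline report failure and reject the PR.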

Related

How to create a JMeter Plugin [closed]

I've been trying to figure out how to add on to the functionality of JMeter for a couple of days, and I'm sort of stumped. I basically want to build testing functionality for a proprietary DB (the specifics aren't too important here). However, the issue I am encountering is where to even begin with the creation of the functionality.
I've tried various stuff on the JMeter website (an example) and the wiki (an example), but it all boils down to this: I can't seem to find a repository which I can pull into Eclipse (and when just building with Ant, download_jars fails because it can't connect to the repo listed in there). Are there any up-to-date resources on how to build a JMeter plugin? Or am I doing something wrong here because I am inexperienced in setting up something like this?
Any help is greatly appreciated, but please don't just link the first thing on google; I have done quite a bit of searching already. Thanks!
Edit: It turned out the reason I couldn't get eclipse working with a repo was due to the network restrictions I had to deal with. When I tried on another computer/network, it worked fine. I used this jmeter tutorial, but since it is out of date regarding the repository (they use SVN now), I used http://svn.apache.org/repos/asf/jmeter as the root using subclipse. In case anyone runs into the same problem I did.
I also searched for how to build a JMeter plugin, for my graph plugin work. I found simple, good source code on Ruben Laguna's blog; from it you can understand the basic structure and the steps to create a JMeter plugin.
Check out this:
Graph plugin - http://rubenlaguna.com/wp/better-jmeter-graphs/
Enhanced-jdbc-sampler - http://rubenlaguna.com/wp/enhanced-jdbc-sampler-for-apache-jmeter-22/
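In case those links ever go stale, here is a minimal, hedged sketch of the other common starting point: a custom sampler built on JMeter's AbstractJavaSamplerClient. Drop the compiled jar into JMeter's lib/ext directory and it shows up under the "Java Request" sampler; the class name, the parameter and the proprietary-DB call are placeholders, not part of the blog posts above:

    import org.apache.jmeter.config.Arguments;
    import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
    import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
    import org.apache.jmeter.samplers.SampleResult;

    // Minimal custom sampler; the real plugin would open a connection to the
    // proprietary DB and run whatever operation needs to be measured.
    public class MyDbSampler extends AbstractJavaSamplerClient {

        @Override
        public Arguments getDefaultParameters() {
            Arguments args = new Arguments();
            args.addArgument("connectionString", "db://localhost:1234"); // hypothetical parameter
            return args;
        }

        @Override
        public SampleResult runTest(JavaSamplerContext context) {
            SampleResult result = new SampleResult();
            result.sampleStart();
            try {
                String conn = context.getParameter("connectionString");
                // ... open the connection and run the query against the proprietary DB here ...
                result.setResponseData("queried " + conn, "UTF-8");
                result.setSuccessful(true);
            } catch (Exception e) {
                result.setSuccessful(false);
                result.setResponseMessage(e.toString());
            } finally {
                result.sampleEnd();
            }
            return result;
        }
    }

A GUI/graph plugin like the one above needs more scaffolding, but the packaging step is the same: compile against the JMeter core jars and deploy the jar to lib/ext.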

How to treat future requirements in terms of TDD [closed]

While attempting to adopt more TDD practices lately on a project, I've run into a situation regarding tests that cover future requirements, and I'm curious how others are solving this problem.
Say, for example, I'm developing an application called SuperUberReporting and the current release is 1.4. As I'm developing features to be included in SuperUberReporting 1.5, I write a test for a new file export feature that will allow exporting report results to a CSV file. While writing that test it occurs to me that features to support exports to some other formats are slated for later versions 1.6, 1.7, and 1.9, which are documented in issue-tracking software. Now the question I'm faced with is whether I should write up tests for these other formats, or wait until I actually implement those features. This question hits at something a bit more fundamental about TDD which I would like to ask more broadly.
Can/should tests be written up front as soon as requirements are known or should the degree of stability of the requirements somehow determine whether a test should be written or not?
More generally, how far in advance should tests be written? Is it OK to write a test that will fail for two years, until the feature it covers is slated to be implemented? If so, how would one organize the tests to separate those that are required to pass from those that are not yet required to pass? I'm currently using NUnit for a .NET project, so I don't mind specifics, since they may better demonstrate how to accomplish such organization.
If you're doing TDD properly, you will have a continuous integration server (something like Cruise Control or TeamCity or TFS) that builds your code and runs all your tests every time you check in. If any tests fail, the build fails.
So no, you don't go writing tests in advance. You write tests for what you're working on today, and you check in when they pass.
Failing tests are noise. If you have failing tests that you know fail, it will be much harder for you to notice that another (legitimate) failure has snuck in. If you strive to always have all your tests pass, then even one failing test is a big warning sign -- it tells you it's time to drop everything and fix that bug. But if you always say "oh, it's fine, we always have a few hundred failing tests", then when real bugs slip in, you don't notice. You're negating the primary benefit of having tests.
Besides, it's silly to write tests now for something you won't work on for years. You're delaying the stuff you should be working on now, and you're wasting work if those future features get cut.
I don't have a lot of experience with TDD (just started recently), but I think while practicing TDD, tests and actual code go together. Remember Red-Green-Refactor. So I would write just enough tests to cover my current functionality. Writing tests upfront for future requirements might not be a good idea.
Maybe someone with more experience can provide a better perspective.
Tests for future functionality can exist (I have BDD specs for things I'll implement later), but should either (a) not be run, or (b) run as non-error "pending" tests.
The system isn't expected to make them pass (yet): they're not valid tests, and should not stand as a valid indication of system functionality.
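As a concrete illustration of (a)/(b) - a hedged sketch, not from the original poster's project: the question mentions NUnit, where the same effect comes from the [Ignore] attribute or Assert.Ignore(); the JUnit 4 version below uses hypothetical test names from the SuperUberReporting scenario:

    import org.junit.Ignore;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.fail;

    // Tests for the current release must pass; tests for future features stay in
    // the suite but are skipped, so they show up as "ignored" rather than red.
    public class ReportExportTests {

        @Test
        public void csvExportHeaderIsCorrect() {
            // 1.5 feature: must pass on every build.
            String header = "id,name,total";  // stand-in for real exporter output
            assertEquals("id,name,total", header);
        }

        @Ignore("PDF export is slated for 1.7; enable when implementation starts")
        @Test
        public void pdfExportProducesNonEmptyFile() {
            // Reported as ignored instead of failing for the next two years.
            fail("not implemented yet");
        }
    }

Test runners report ignored/pending tests separately from failures, so the CI build's green/red signal stays meaningful while the reminder stays in the codebase.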

bug-tracker and wiki for project specifications [closed]

SHORT QUESTION
If you want to skip the details below, here's the short question:
I want to know whether you keep your app's specifications similarly, in a bug tracker + a wiki, and how you split the information between them for good management. I'm looking for a simple solution, or just a point to start from.
DETAILS
I need to keep track of the features for a web app that I want to build. So, I've used MediaWiki to gather a list of features.
For each feature, I have a wiki page where I include functional specs, technical specs and various related brainstormings in FreeMind format or plain text. Also, I include a series of open-questions related to it as TODOs and lots of images for various use cases. I found the wiki to be an excellent place where to keep all these.
I have a page in the wiki with all features transcluded manually so I can see them all in one page, in a specific format.
I also have a page in the wiki where I state what's the goal for v1.0, and a manually transcluded list of features for this version.
In the bug tracker (I use ClockingIT), I want to keep track of the tasks, bugs, etc. in order to build version 1.0 of the product.
ISSUE
Since I keep all features (well, the major features at least) in the wiki, I now feel the need to duplicate them in the bug tracker. Also, after brainstorming on v1.0, I realized there are many smaller features (too small to include in the wiki) that I'll need to keep track of in the bug tracker.
The problem is that I end up with two systems that keep and manage the set of features, and lots of duplicates will appear, like:
which features get into v1.0 -> this exists in the wiki as a page, and the bug tracker keeps a milestone for it; moving a feature from v1.0 to another version means updating both the bug tracker and the wiki
in the bug tracker I add comments as logs for what I do while working on a feature, so the wiki becomes outdated because part of the brainstorming moves into the bug tracker
the wiki tends to contain the major features while the bug tracker contains minor ones (like adding a button to show/hide a section), but the line between major and minor features is very subjective and tends to get messy, so I'll have a hard time searching for a feature - I'll have to search both the wiki and the bug tracker, or remember whether it was a major or a minor feature
QUESTION
Since I need to keep the features in the wiki, because it is an excellent tool for brainstorming, information keeping, etc., what should I include in the bug tracker? How can I efficiently separate these two tools' responsibilities so they integrate well with each other and I don't duplicate any (or only a small amount of) data?
Thank you!
I use something mixed. On the wiki, there is a requirements page (among other pages) which describes the features and delivery data. Some features have separate topics where design/implementation details are explained. The requirements topic includes links to bugs with a short description of the bug/feature. Not all features are reflected as bugs. All bugs are listed in the current release section if they are planned to be fixed in that release. A separate link goes to the bug-tracking system to show all the bugs for the product (there are many products in the bug tracker). So:
all features, bug fixes are listed on the requirements page under a particular release;
bug tracking system contains bugs found during testing;
there is no exact definition of which requests go to the bug tracker and which don't. An external reviewer may report feature requests as tickets in the bug tracker.
the key point is to keep all the requirements together and organized per release.
Maybe there are better ways to organize things, but this one is the simplest to me, and doesn't require lots of time.
You will need to customize it for your needs, but have you seen Trac: http://trac.edgewall.org/?
It will serve many of your purposes: it combines a bug tracker with a wiki, among other things.
Trac is an enhanced wiki and issue tracking system for software development projects. It provides an interface to Subversion (or other version control systems), an integrated Wiki and convenient reporting facilities.

Breaking the Build, Why is it a bad thing? [closed]

When I started building a continuous integration server, I ran across the statement "It's bad to break the build [of the code]." After finishing that project I came to the conclusion that
"Breaking the build." was a catchy phrase that was being thrown around a lot because of the alliteration, or
I wasn't understanding a key element of Continuous Integration.
So my question is in the spirit of #2: why is breaking the build a bad thing?
Be very careful in labeling "Breaking the Build" as a bad thing. It is something that needs immediate attention, but it is also a very normal and expected part of the development cycle. This is why Continuous Integration is so useful -- it tells you immediately when the build is broken, and what change set caused it. It helps you get back on track quickly.
If your culture penalizes "Breaking the Build", then you are in danger of cultivating a toxic work environment. Again, consider it to be something that needs immediate attention, but don't label it as "bad".
Because if other people check out your broken changes, they won't be able to work, or if they do they will do so less efficiently.
It also means you're not properly testing your changes before you commit, which is key in CI.
From Martin Fowler http://martinfowler.com/articles/continuousIntegration.html
The whole point of working with CI is that you're always developing on a known stable base. It's not a bad thing for the mainline build to break, although if it's happening all the time it suggests people aren't being careful enough about updating and building locally before a commit. When the mainline build does break, however, it's important that it gets fixed fast.
Because if other people check out the changes, they won't be able to work...
[Geek & Poke comic, copyrighted to Geek & Poke under a Creative Commons License]
If you break the build, as happened to me yesterday, your team-mates cannot build the source code when they try to use it, so they will struggle to test the work they are doing. It gets worse the bigger your team is.
Surely the whole point of continuous integration is to identify problems early. Daily or more frequent check-ins are required to reduce conflicts to a manageable size.
You should get an up to date copy of the repository and build locally. This will tell you if your proposed check in will break the build. You resolve any issues and then check in.
In this way the integration issues are kept local and easy to fix.
Breaking the build has dire implications for the project schedule (and the blood pressure of team-mates)
=> Other developers who then get the latest version can no longer build their own changes, delaying them
=> Continuous integration will break, meaning that formal testing can be delayed
Many version control tools (e.g. TFS) can prevent developers from checking in code which does not compile or pass unit or code analysis tests.
Once builds start breaking, people get reluctant to get the latest changes, and you begin the deadly spiral towards Big Bang integration of changes.
I don't think breaking the build is necessarily a bad thing, as long as there is a well-known, working branch or tag in the repository. That said, make your own branch in the repository if you know your code is going to break the build today but you will fix it next week. Then later you can merge back into trunk.
Because it means someone has done something bad (or at least, some changes have clashed) and you can no longer build and deploy your system.
Breaking the build means that you committed code to a shared repository that either (a) does not compile, or (b) does not work (fails unit tests). Anyone else who's developing from that shared repository is going to have to deal with the broken code you committed until it is fixed. That will cause a loss of productivity for the entire team.

What are all of the automated build tasks that can be performed? [closed]

I'm curious. I'm looking into creating a CI server and wondering, after the first couple of obvious tasks, what else can an automated build do?
The tasks that I'm aware of (not in any order):
Compile (debug/release versions)
Code style conformance
Automated tests (unit/integration/etc.)
Code coverage
Version incrementing
Deployment
I'm not looking for the names of software, the build engine to use, or anything like that; just the repetitive and (maybe) important tasks that can be automated to make the build process ridiculously simple from an end-user perspective.
The simple answer to this is: basically anything that a script can be written for.
For example, if you are using CruiseControl, anything you can do from an Ant script can be automated; and that includes calling other (not necessarily Ant) scripts as well.
That being said, you've got most bases covered in your initial list. To that I would add
Generation of documentation
Repository maintenance and backup operations
Auto-updating the company website, e.g. whenever there's a new release of the software, documentation is updated, etc.
Reports, e.g. aggregating and summarising bug tracker issues and activity per project/product
HTH
Building documentation
Building installers
Creating web sites
Initialising virtual images
Setting up databases
Reporting?
You may want to report the things you find during those tasks you outlined above. You could also do things such as duplication reporting, or if you run something like findbugs you could report on issues found (e.g. http://findbugs.sourceforge.net/bugDescriptions.html)
You could also generate a releasable package of the product in the build.
It's all about automation. If you can find something that needs to be done, then automate it. For example, you can do tonnes of code analysis, or testing. Ultimately it comes down to repeating things easily. Find what you need to do to improve quality and automate it (and I fall strongly on the side of "more testing is better").
