Does anyone have experience with metrics for Design-by-Contract, or can anyone recommend metrics to measure the usage of Design-by-Contract in a code base?

We are currently introducing Design-by-Contract to a software development group of about 60 developers who are working on different components. We started by defining Design-by-Contract policies for C# and Java. To measure progress, we count the number of classes and the number of contract assertions (preconditions, postconditions and invariants) with a simple search for keywords (excluding comments and string literals). So we have two statistics:
Number of contract assertions per component
Average number of contract assertions per class per component
Does anyone have experience with metrics for Design-by-Contract, or can anyone recommend metrics to measure its usage in a code base?
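For reference, a minimal sketch of the kind of keyword count we use; the assertion names (requireThat, ensureThat, checkInvariant) are placeholders for whatever a policy actually defines, and the comment/string-literal stripping is only approximate:

import java.io.IOException;
import java.nio.file.*;
import java.util.regex.Pattern;
import java.util.stream.Stream;

public class ContractCounter {
    // Roughly strips block comments, line comments and string literals.
    private static final Pattern NOISE = Pattern.compile(
            "/\\*(?s:.*?)\\*/|//.*|\"(?:\\\\.|[^\"\\\\])*\"");
    // Hypothetical assertion keywords; substitute your own DbC policy's names.
    private static final Pattern ASSERTION = Pattern.compile(
            "\\b(?:requireThat|ensureThat|checkInvariant)\\s*\\(");

    public static void main(String[] args) throws IOException {
        try (Stream<Path> files = Files.walk(Path.of(args[0]))) {
            long total = files
                    .filter(p -> p.toString().endsWith(".java"))
                    .mapToLong(ContractCounter::count)
                    .sum();
            System.out.println("Contract assertions: " + total);
        }
    }

    private static long count(Path file) {
        try {
            String src = NOISE.matcher(Files.readString(file)).replaceAll("");
            return ASSERTION.matcher(src).results().count();
        } catch (IOException e) {
            return 0; // unreadable file: skip it
        }
    }
}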

I think your first step should be code review of all new code that is checked in.
I can't see an automated checking tool working until you have made it "normal" for all your programmers to use Design-by-Contract.
It might also help to include the results of the code reviews on the form that is filled in as part of the process of deciding whether a programmer gets a pay increase.

I would suggest looking at contracts the same way you look at unit tests: try to measure the coverage of code by invariants and postconditions, and the number of checked arguments for preconditions.
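As a sketch, one way to frame such coverage-style numbers; these ratios and names are a suggestion of mine, not an established standard:

// Hypothetical coverage-style DbC metrics, by analogy with test coverage.
public record DbcCoverage(int classes, int classesWithInvariant,
                          int publicMethods, int methodsWithPostcondition,
                          int parameters, int checkedParameters) {
    public double invariantCoverage()     { return 100.0 * classesWithInvariant / classes; }
    public double postconditionCoverage() { return 100.0 * methodsWithPostcondition / publicMethods; }
    public double preconditionCoverage()  { return 100.0 * checkedParameters / parameters; }
}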

Related

What influences the maintainability result in SonarQube?

I'm confronted with a huge "spaghetti code" code base with a known lack of documentation, lack of test coverage, high complexity, lack of design rules to follow, etc. I let the code be analysed by a default sonar-scan and, surprisingly to me, maintainability gets a really great score, with a technical debt of 1.1%! Reality shows that almost every change introduces new bugs.
I'm quite perplexed, and wonder if some particularities of the implementation could explain this score. We have, for example, quite a lot of interfaces (it feels like 4-5 interfaces per implementation), and we use reflection and the service locator pattern.
Are there other indicators I could use that would be more relevant for improving quality?
The maintainability rating is based on the ratio of the estimated time to fix all the issues of type Code Smell in your code base to the estimated time it would take to write the code in its current state.
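As a rough sketch, assuming SonarQube's documented default thresholds (your quality profile may differ), the rating falls out of that debt ratio like this:

public class MaintainabilityRating {
    // Technical debt ratio = remediation cost / development cost.
    // Default SonarQube thresholds; check your own quality profile.
    static char rate(double debtRatio) {
        if (debtRatio <= 0.05) return 'A';
        if (debtRatio <= 0.10) return 'B';
        if (debtRatio <= 0.20) return 'C';
        if (debtRatio <= 0.50) return 'D';
        return 'E';
    }

    public static void main(String[] args) {
        // A 1.1% debt ratio rates 'A', even in a code base that feels
        // unmaintainable: issues the rules never detect add no debt.
        System.out.println(rate(0.011)); // prints A
    }
}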
You should also look at the Bugs and Vulnerabilities in the code base.
Regarding your specific points (and assuming we're talking about Java):
known lack of documentation - there is a rule in the default profile that looks for Javadocs. You might read its description and parameter values to see what it does and does not find.
lack of test coverage - there is currently a "hole" in this detection; if there is no coverage for a class, then the class is not taken into account when computing lines that could/should be covered, and therefore when calculating coverage percentages. It should be fixed "soon". The first steps will appear on the platform side in 6.2, but will need accompanying changes in the language plugins to take effect.
high complexity - there are rules for this. If they are not finding what you think they should, then take a look at their (adjustable) thresholds.
lack of design rules - the only rule that might address this (Architectural Constraint) is deprecated, slated for removal, not on by default, and dropped from the latest versions of the plugin
use of reflection - there aren't currently rules available to detect this

Boundaries of acceptance tests

My application, among other things, uses some crawlers to read information exposed by a remote XML feed of another application (we're not responsible for that one). Crawled data is later displayed to the user.
The XML might contain simple data and links, which we follow if we need additional data.
The tests in our system are both unit tests, which check that we parse the XML documents correctly, and acceptance tests, which are meant to test what we display in our UI.
I was reasoning about the acceptance tests, and that's what this question is about.
Right now, for each acceptance test, we bring up an embedded HTTP server that serves some test data specific to that test. We then start up our application, crawl the test data, and verify the criteria for the test. While this approach has the advantage of testing the whole system end to end, it also has the side effect of increasing the build time considerably each time we add a new acceptance test.
Is this the right approach for the acceptance tests?
I was wondering if, since the system that provides the feeds is an external one, wouldn't it be better to test the network communication layer and the crawlers at unit level and run the acceptance tests assuming the data has already been crawled?
I'd like to hear some thoughts from somebody else. :-)
Thanks!
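For reference, here's a minimal sketch of the per-test embedded server approach I described, using the JDK's built-in HttpServer (the class name and the /feed path are illustrative):

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class FeedStub {
    // Serves one canned XML document on a free port; stop with server.stop(0).
    public static HttpServer serve(String xml) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/feed", exchange -> {
            byte[] body = xml.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/xml");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        return server; // crawl http://localhost:<server.getAddress().getPort()>/feed
    }
}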
Acceptance tests do tend to run slowly, require more setup, and tend to be far more brittle than unit or integration tests. If you search the web for "test pyramid" you will find plenty of information on this. The general consensus is that you should have tests at the unit, integration and acceptance levels, with most tests being unit tests and just a few acceptance tests that do the end-to-end stuff. Development teams will often set up their CI servers to run long acceptance tests only during the nightly build so that they don't impact the performance of the unit test runs.
I agree with what Andrew wrote, but wanted to add a different angle to the answer, one which I think is quite often missed in such discussions.
Your team is building a product and your company wants to get the best value for money from this endeavor.
At the beginning you might think that tests slow you down: your system is simple and everyone understands it, so why waste time? It might feel like you get little value for money from writing tests. But this is obviously wrong if you adopt a longer-term view of your product development. I'll stop here, though, as I'm preaching to the converted.
However, if you adopt the same mindset when trying to answer your question, you will see that actually the answer depends a lot on your circumstances. I'm going to use a rather simplified math model to explain my thinking:
Let P(bug | test) denote the probability of a bug, given that you are running the test; let C(test) denote the cost of running the test; and let C(bug) denote the cost of a bug.
If you focus on a particular bug, you want to minimize the following sum, where your suite consists of n tests:
[P(bug | test_1)*C(bug) + C(test_1)] + ... + [P(bug | test_n)*C(bug) + C(test_n)]
If you were to disregard the test cost, then clearly the more tests you have the better, right? But because tests need to be maintained, executed, etc., they have a non-zero cost. This means you have a trade-off, and in the end you are performing a U-curve optimization, a bit like finding the optimal trade-off between release and holding costs in inventory management.
The actual costs depend a lot on a particular domain, product areas and test types.
If you are in banking, the cost of a bug can be enormous, so it will dwarf the test costs. But if you are writing a recommendation engine for music, having suggestions that are off for a few hours will not be a problem. Actually, in the latter case, you probably want the freedom to experiment with different algorithms and the ability to iterate quickly, so the cost of a test might overshadow the cost of a bug.
Let's say you work on a particular product. Even that is not homogeneous. There will be areas of your product that are more critical than others. Take Twitter, for example: if a person could not tweet or load the tweets of those they follow, it would be a big problem. On the other hand, if the "who to follow" suggestions are empty, the impact on the product will be much smaller.
Finally, the cost of tests is not uniform either. But as I said earlier, it is not negligible and needs to be considered with care. I have worked both in places where poor test coverage slowed the teams down because they lacked the confidence to push their changes to production, and in places where test runs were so long that people complained they were constantly building and hardly working.
One last thing: it is good to build with resiliency to failure in mind; it will lower the cost of bugs for you.

BDD and TDD, what is the correct workflow?

My understanding is such that:
BDD is the process of evaluating how software needs to behave, and then writing acceptance tests on which to base your code. You would write code using a TDD approach, by writing unit tests for methods and building your classes around the unit tests (code, test, refactor). When the code is written, you test it to see that it satisfies the original acceptance test.
Can anyone with experience of the entire process comment on my interpretation, and give a walkthrough of a simple application using these Agile principles? I see there is plenty of text on BDD and TDD in separate publications, but I am looking at how the two processes complement one another in real-world development.
Try thinking of them as examples, rather than tests.
For the whole application, we come up with an example of how a user might use that application. The example is a specific instance of usage which illustrates a desired behaviour. So, for instance, we might say that a till application allows for refunds. A till operator who uses that till will be familiar with the scenario in which Fred brings back a microwave for a refund:
Given Fred bought a microwave for $100
When he brings the microwave back for a refund
Then he should get $100 refunded to his credit card.
Now it's easy to think of other scenarios too; for instance, the one where Fred got a discount and only gets $90 back, or the one where Fred had broken the microwave himself and we refuse his refund, etc.
When we actually start coding the till software, we break our code down into small pieces; classes, functions, modules, etc. We can describe the behaviour of a piece of code, and provide an example. So, for instance, we might say that a refund calculator should take discounts into account. This is only a small part of the refund scenario. We have a class, RefundCalculator, and a unit test with a method that says shouldTakeDiscountsIntoAccount.
We might put the steps for our example in comments, for instance:
// Given a microwave was sold at 10% discount for $100
...
// When we calculate the refund due
...
// Then the calculator should tell us it's $90.
...
Then we fill in the code to turn this into a unit test, and write the code that makes it pass.
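For instance, here's a sketch of the filled-in unit test; Sale and refundFor are hypothetical names (only RefundCalculator and shouldTakeDiscountsIntoAccount come from the example above), with the production code inlined to keep it self-contained:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class RefundCalculatorTest {
    // Hypothetical production code, inlined so the sketch compiles on its own.
    record Sale(double price, double discount) {}
    static class RefundCalculator {
        double refundFor(Sale sale) {
            return sale.price() * (1 - sale.discount());
        }
    }

    @Test
    void shouldTakeDiscountsIntoAccount() {
        // Given a microwave was sold at 10% discount for $100
        Sale sale = new Sale(100.00, 0.10);

        // When we calculate the refund due
        double refund = new RefundCalculator().refundFor(sale);

        // Then the calculator should tell us it's $90
        assertEquals(90.00, refund, 0.001);
    }
}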
Normally "BDD" refers to the scenario which describes the whole application, but the ideas actually started at a unit level, and the principles are the same. The only difference is that one is an example of a user using an application, and the other is an example of a class using another class (or function, or what have you). So BDD on the outside of an application is like ATDD (Acceptance-Test-Driven Development) and BDD for classes is like TDD. Hopefully this helps give you an idea of how the concepts hang together.
The only difference is that we got rid of the word "test", because we find it easier to ask people for an example than a test, and it helps keep people thinking about whether they understand the problem, rather than thinking about how to test a solution.
This answer on "top down" (or outside-in) vs. "bottom-up" may also help you.
Your summary is basically correct. The labels can be misleading: people who call what they do 'BDD' will write acceptance tests and unit tests, people who call what they do 'TDD' will write acceptance tests and unit tests. To me, the distinction between the two is much ado about nothing. You will read many people's experiences with different flavors of this basic process. Try the approaches that seem to make sense in your situation and always be ready to make adjustments based on what works and doesn't work for you - that's the essence of agile.
There are two approaches to BDD stories: imperative and declarative. A developer will likely find imperative stories easier to write, especially when used to scripting unit tests.
However, when approaching this from an Agile Test-First/Test-Driven Development angle, a declarative approach results in BDD narratives that are cogent with the development stories. This is because the BDD narrative continues to reflect the domain language of the business rather than the programming domain.
How do you capture requirements with declarative acceptance tests?

How to measure software development performance? [closed]

I am looking for ways to measure the performance of a software development team. Is it a good idea to use the build tool? We use Hudson as an automated build tool. I wonder if I can take the information from Hudson reports and derive from it the progress of each of the programmers.
The main problem with performance metrics like this is that humans are VERY good at gaming any system that measures their own performance to maximize that exact performance metric - usually at the expense of something else that is valuable.
Let's say we do use the Hudson build to gather stats on programmer output. What could you look for, and what would be the unintended side effects of measuring it once programmers are clued in?
Lines of code (developers just churn out mountains of boilerplate code, and other needless overengineering, or simply just inline every damn method)
Unit test failures (don't write any unit tests, then they won't fail)
Unit test coverage (write weak tests that exercise the code, but don't really test it properly)
Number of bugs found in their code (don't do any coding, then you won't get bugs)
Number of bugs fixed (choose the easy/trivial bugs to work on)
Actual time to finish a task measured against their own estimate (estimate higher to give more room)
And it goes on.
The point is, no matter what you measure, humans (not just programmers) get very good at optimizing to meet exactly that thing.
So how should you look at the performance of your developers? Well, that's hard. It involves human managers who are good at understanding people (and the BS they pull), and who can look at each person subjectively, in the context of who/where/what they are, to figure out whether they are doing a good job or not.
What you do once you've figured out who is/isn't performing is a whole different question though.
(I can't take credit for this line of thinking; it's originally from Joel Spolsky, here and here.)
Do NOT measure the performance of each individual programmer simply using the build tool. You can measure the team as a whole, sure, or you can certainly measure the progress of each programmer, but you cannot measure their performance with such a tool. Some modules are more complicated than others, some programmers are tasked with other projects, etc. It's not a recommended way of doing this, and it will encourage programmers to write sloppy code so that it looks like they did the most work.
No.
Metrics like that are doomed to failure. Different people work on different parts of the code, on different classes of problem, and absolute measurements are misleading at best.
The way to measure developer performance is to have excellent managers that do their job well, have good specs that accurately reflect requirements, and track everyone's progress carefully against those specs.
It's hard to do right. A software solution won't work.
I think this needs a very careful approach, as most of the traditional methods of measuring developer performance, such as lines of code, number of check-ins, or number of bugs fixed, have proven to be subjective given today's software engineering concepts. We need to value a team performance approach rather than measuring individual KPIs in a project. However, working in a commercial development environment, it is important to keep track of, and a close eye on, the following factors for individual developers:
Code review comments - for each project, we can decide the number of code reviews that need to be conducted in a given period. Based on the code reviews, individuals get remarks about improvements to their coding standards. Recurring issues in code reviews of the same individual's code need to be brought to their attention. You can use automated code review tools or manual code reviews.
Test coverage and completeness of tests - the percentage to be covered needs to be decided upfront, and if a certain developer often fails to meet it, it needs to be taken care of.
Willingness to sign up for complex tasks and deliver them without much struggle
Achieving what’s defined as “Done” in a user story
Mastery level of each technical area.
With an agile approach in some projects, the measurements of the development team and the expected performance are decided based on the releases. At each release planning session, different 'contracts' are negotiated with the team members for the expected performance. I find this approach more successful, as there is no reason to adhere to UI-related measurements in a release where a complex algorithm is to be delivered.
I would NOT recommend using build tool information as a way to measure the performance/progress of software developers. Some of the confounding problems: possibly one task is considerably harder than another; possibly one task is much more involved in "design space" than "implementation space"; possibly (probably) the more efficient solution is the better solution, but that better solution contributes fewer lines of code than a terribly inefficient one which provides many, many more; etc.
Speaking of KPIs for software developers, www.smartKPIs.com may be a good resource for you. It contains a user-friendly library of well-documented performance measures. At the moment it lists over 3300 KPI examples, grouped in 73 functional areas, as well as 83 industries and sub-categories.
KPI examples for software developers are available on this page: www.smartKPIs.com - application development. They include, but are not limited to:
Defects removal efficiency
Data redundancy
In addition to examples of performance measures, www.smartKPIs.com also contains a catalogue of performance reports that illustrate the use of KPIs in practice.
Examples of such reports for information technology are available on: www.smartKPIs.com - KPIs in practice - information technology
The website is updated daily with new content, so check it from time to time for additional content.
Please note that while examples of performance measures are useful to inform decisions, each performance measure needs to be selected and customized based on the objectives and priorities of each organisation.
You would probably do better measuring how well your team tracks to schedules. If a team member (or the entire team) is consistently late, you will need to work with them to improve performance.
Don't take short-cuts or look for quick and easy ways to measure the performance/progress of developers. There are many, many factors that affect a developer's output. I've seen a lot of people try various metrics ...
Lines of code produced - encourages developers to churn out inefficient garbage
Complexity measures - encourages over analysis and refactoring
Number of bugs produced - encourages people to seek out really simple tasks and to hate your testers
... the list goes on.
When reviewing a developer you really need to look at how good their work is and define "good" in the context of what the company needs and what situations/positions the company has put that individual in. Progress should be evaluated with equal consideration and thought.
There are many different ways of doing this; entire books have been written on the subject. You could use reports from Hudson, but I think that would lead to misinformation and provide crude results. Really you need a task tracking methodology.
Check how many lines of code each one has written.
Then fire the bottom 70%.. NO 90%!... EVERY DAY!
(for the folks that aren't sure, YES, I am joking. Serious answer here)
We get 360 feedback from everyone on the team. If all your team members think you are crap, then you probably are.
There is a common mistake that many businesses make when setting up their release management tool. The Salesforce release management toolkit is one of the best ones available in the market today, but if you do not follow the vital steps of setting it up, you will definitely have some very bad results. You will get to use it but not to its full capacity. Establishing release management processes in isolation from the business processes is one of the worst mistakes to make. Release management tools go hand in hand with the enterprise strategy, objectives, governance, change management plus some other aspects. The processes of release management need to be formed in such a way that everyone in the business is on the same page.
Goals of release management
The main goal of release management is to have a consistent set of reliable and repeatable processes that are resource independent. This enables the achievement of the most favorable business value while at the same time optimizing the utilization of resources available. Considering that most organizations focus on running short, high-yield business projects, it is essential for optimization of the delivery value chain of the application to make certain that there are no holdups in the delivery of the business value.
Take for instance the force.com migration toolkit, as this tool has proven to be great in governance. A release management tool should allow for optimal visibility and accountability in governance.
Processes and release cycles
The release management processes must be consistent for the whole business. It is necessary to have streamlined and standardized processes across the various tool users. This is because they will be using the same platform and resources that enable efficient completion of their tasks. Having different processes for different divisions of your business can lead to grievous failures in tool management. The different sets of users will need to have visibility into what the others are doing. As aforementioned, visibility is of great importance in any business process.
When it comes to the release cycles, it is also imperative to have one centralized system that will track all the requirements of the different sets of users. It is also necessary to have this system centralized so that software development teams get insight into the features and changes requested by the business. Requests have to become priorities to make sure that the business gets to enjoy maximum benefit. Having a steering team is important because it is involved in the reviewing of business requirements plus also prioritizing the most appropriate changes that the business needs to make.
The changes that should happen to the Salesforce system can be very tricky and therefore having a regular meet up between the business and IT is good. This will help to determine the best changes to make to the system that will benefit the business. By considering the cost and value of implementing a feature, the steering committee has the task of deciding on the most important feature changes to make.
There is also some good research here: http://intersog.com/blog/tech-tips/how-to-manage-millennials-on-software-development-teams
This is an old question but still, one thing you can do is borrow velocity from agile software development: you assign a weight to each task and then calculate how much "weight" you complete in each sprint (or iteration, or whatever development life cycle you use). Of course this goes hand in hand with the fact that, as a commenter mentioned before, you need to actively keep track yourself of whether your developers are working or chatting online.
If you know your developers are working responsibly, then you can rely on that velocity to give you an estimate of how much work the team can do. If at any iteration this number drops (considerably), then either it was poorly estimated or the team worked less.
Ultimately, the use of KPIs together with velocity can give you per-developer (or per-team) insights on performance.
Typically, directly using metrics for performance measurement is considered a Bad Idea, and one of the easy ways to run a team into the ground.
Now, you can use metrics like % of projects completed on-time, % of churn as code goes toward completion, etc...it's a wide field.
Here's an example:
60% of mission-critical bugs were written by Joe. That's a simple, straightforward metric. Fire Joe, right?
But wait, there's more!
Joe is the Senior Developer. He's the only guy trusted to write ultra-reliable code, every time. He's written about 80% of the mission-critical software, because he's the best.
Metrics are a bad measurement of developers.
I'd like to share my experience and how I learnt a very valuable process for measuring team performance. I must admit I had fallen into tracking KPIs simply because most other departments did the same, without really using them for insight, until I was given the responsibility of evaluating developer performance; after a good deal of reading, I arrived at the following solution.
On every project, I would engage the team in a discussion of the project requirements and involve them so everyone knows what is to be done. In the same discussion, through collaboration, we would break the project into tasks and weight those tasks. Previously we would estimate project completion as 100%, with each task contributing a percentage. That worked for a while, but it was not the best solution. Now we base the tasks on weights, or points to be exact, and use relative measurement to compare tasks and differentiate the weights. For example, say there is a requirement to develop a web form to gather user data.
The tasks would break down something like this:
1. User Interface - 2 Points
2. Database CRUD - 5 Points
3. Validation - 4 Points
4. Design (css) - 3 Points
With this strategy we can pinpoint a weekly approximation of how much we have completed and what is still pending across the task force. We can also see who has performed best.
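For illustration, a tiny sketch of that weekly approximation using the point values above:

public class SprintProgress {
    public static void main(String[] args) {
        // Total weight of the web form feature: 2 + 5 + 4 + 3 = 14 points.
        int totalPoints = 2 + 5 + 4 + 3;
        // Suppose UI (2 points) and Design (3 points) are done this week.
        int completedPoints = 2 + 3;
        double progress = 100.0 * completedPoints / totalPoints;
        System.out.printf("Completed: %.1f%%%n", progress); // 35.7%
    }
}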
I must admit that I still face some challenges with this strategy, such as the fact that not every developer is comfortable with every technology. Some are excited to learn new technologies simply because a high percentage of points falls in that area; others will simply do what they can.
Remember: do not track a KPI for its own sake, track it for its insight.

Calculate code metrics [closed]

Are there any tools available that will calculate code metrics (for example number of code lines, cyclomatic complexity, coupling, cohesion) for your project and over time produce a graph showing the trends?
On my latest project I used SourceMonitor. It's a nice free tool for code metrics analysis.
Here is an excerpt from SourceMonitor official site:
Collects metrics in a fast, single pass through source files.
Measures metrics for source code written in C++, C, C#, VB.NET, Java, Delphi, Visual Basic (VB6) or HTML.
Includes method and function level metrics for C++, C, C#, VB.NET, Java, and Delphi.
Saves metrics in checkpoints for comparison during software development projects.
Displays and prints metrics in tables and charts.
Operates within a standard Windows GUI or inside your scripts using XML command files.
Exports metrics to XML or CSV (comma-separated-value) files for further processing with other tools.
For .NET, besides NDepend (which is simply the best tool), I can recommend vil.
The following tools can perform trend analysis:
CAST
Klocwork Insight
Sonar is definitely a tool that you must consider, especially for Java projects. However, it will also handle PHP or C/C++, Flex and Cobol code.
Here is a screenshot that shows some metrics on a project: http://sonar.codehaus.org/wp-content/uploads/2009/05/squid-metrics.png
Note that you can try the tool by using their demo site at http://nemo.sonarsource.org
NDepend for .NET
I was also looking for a code metrics tool/plugin for my IDE, but as far as I know there are none (for Eclipse, that is) that also show a graph of complexity over a specified time period.
However, I did find the Eclipse Metrics plugin; it can handle:
McCabe's Cyclomatic Complexity
Efferent Couplings
Lack of Cohesion in Methods
Lines Of Code in Method
Number Of Fields
Number Of Levels
Number Of Locals In Scope
Number Of Parameters
Number Of Statements
Weighted Methods Per Class
And while using it, I didn't really miss the graphing option you are seeking.
I think that, if you don't find any plugins/tools that can handle the graphing over time, you should look at the tool that suits you most and offers you all the information you need; even if the given information is only for the current build of your project.
As a side note, the Eclipse Metrics plugin allows you to export the data to an external file (the link goes to an example), so if you use a source control tool - and you should! - you can always export the data for a specific build and store the file along with the source code. That way you still have a (basic) way to go back in time and check the differences.
Keep in mind: what you measure is what you get. LOC says nothing about productivity or efficiency.
Rate a programmer by lines of code and you will get... lines of code.
The same argument goes for other metrics.
On the other hand, http://www.crap4j.org/ is a very conservative and useful metric. It sets complexity in relation to coverage.
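For reference, crap4j's published score sets a method's cyclomatic complexity against its test coverage; a rough sketch, with coverage expressed as a fraction rather than crap4j's percentage:

public class Crap {
    // CRAP(m) = comp(m)^2 * (1 - cov(m))^3 + comp(m)
    // comp = cyclomatic complexity, cov = test coverage in [0, 1].
    static double crap(int complexity, double coverage) {
        return Math.pow(complexity, 2) * Math.pow(1.0 - coverage, 3) + complexity;
    }

    public static void main(String[] args) {
        System.out.println(crap(10, 1.0)); // fully covered: score is just the complexity, 10.0
        System.out.println(crap(10, 0.0)); // uncovered: the score explodes to 110.0
    }
}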
NDepend - I am using it, and it's the best tool for this purpose.
Check this:
http://www.codeproject.com/KB/dotnet/NDepend.aspx
Concerning the tool NDepend, it comes with 82 different code metrics, from Number of Lines of Code to Method Rank (popularity), Cyclomatic Complexity, Lack of Cohesion of Methods, Percentage Coverage (extracted from NCover or VSTS), Depth of Inheritance...
With its rule system, NDepend can also find issues and estimate technical debt, which is an interesting code metric (the amount of dev effort needed to fix problems vs. the amount of dev time wasted per year by leaving problems unfixed).
All these metrics are detailed here.
If you're in the .NET space, Developer Express' CodeRush provides LOC, Cyclomatic Complexity and the (rather excellent, IMHO) Maintenance Complexity analysis of code in real-time.
(Sorry about the Maintenance Complexity link; it's going to Google's cache. The original seems to be offline ATM).
Atlassian FishEye is another excellent tool for the job. It integrates with your source control system (currently supports CVS, SVN and Perforce), and analyzes all your files that way. The analysis is rather basic though, and the product itself is commercial (but very reasonably priced, IMO).
You can also get an add-on for it called Crucible that facilitates peer code reviews.
For Visual Studio .NET (at least C# and VB.NET) I find the free StudioTools to be extremely useful for metrics. It also adds a number of features found in commercial tools such as ReSharper.
Code Analyzer is a simple tool which generates this kind of metrics.
For Python, pylint can provide some code quality metrics.
There's also a code metrics plugin for reflector, in case you are using .NET.
I would recommend the Code Metrics Viewer extension for Visual Studio.
It is very easy to analyze a solution at once, and also to compare results to see if you've made progress ;-)
Read more here about the features.
On the PHP front, I believe phpUnderControl, for example, includes metrics through phpUnit (if I am not mistaken).
Keep in mind that metrics are often flawed. For example, a coder who's working on trivial problems will produce more code, and therefore look better on your graphs, than a coder who's cracking the complex issues.
If you're after some trend analysis, does it really mean anything to measure beyond SLOC?
Even if you're just doing a grep for trailing semicolons and counting the number of lines returned, what you are after is consistency in the SLOC measurement technique. That way, today's measurement can be compared with last month's measurement in a meaningful way.
I can't really see what a trend in McCabe Cyclomatic Complexity would give you. I think CC should be used more as a snapshot of quality, to provide feedback to the developers.
Edit: Ooh. Just thought of a couple of other measurements that might be useful: comments as a percentage of SLOC, and test coverage. Neither of these is something you want to let slip. Coming back to retrofit either of them is never as good as doing them "in the heat of the moment!"
HTH.
cheers,
Rob
Scitools' Understand does have the capability to generate a lot of code metrics for you. I don't have a lot of experience with the code metrics features, but the static analysis features in general were nice and the price was very reasonable. The support was excellent.
Project Code Meter gives a differential development history report (in Excel format) which shows your coding progress metrics in SLOC, time, and productivity percentage (its time estimation is based on cyclomatic complexity and other metrics). Then in Excel you can easily produce the graph you want.
See this article, which describes it step by step:
http://www.projectcodemeter.com/cost_estimation/help/FN_monsizing.htm
For Java you can try our tool, QualityGate, which computes more than 60 source code metrics, tracks all changes through time, and also provides an overall rating for the maintainability of the source code.
