How would you measure code "quality" across a large project [closed]

I'm working on a quite large project, a few years in the making, at a pretty large company, and I'm taking on the task of driving toward better overall code quality.
I was wondering what kind of metrics you would use to measure quality and complexity in this context. I'm not looking for absolute measures, but a series of items which could be improved over time. Given that this is a bit of a macro-operation across hundreds of projects (I've seen some questions asked about much smaller projects), I'm looking for something more automatable and holistic.
So far, I have a list that looks like this:
Code coverage percentage during full-functional tests
Recurrence of BVT failures
Dependency graph/score, based on some tool like NDepend
Number of build warnings
Number of FxCop/StyleCop warnings found/suppressed
Number of "catch" statements
Number of manual deployment steps
Number of projects
Percentage of code/projects that's "dead", as in, not referenced anywhere
Number of WTF's during code reviews
Total lines of code, maybe broken down by tier
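A few of the items above (project count, catch statements, raw LOC) can be scraped with a small script run over the source tree. Below is a minimal sketch, assuming a C#/.NET-style layout with one .csproj per project; the counting is deliberately naive and only meant to produce numbers you can trend over time:

```python
#!/usr/bin/env python3
"""Rough per-repository counters for a few of the metrics listed above.

These are crude text matches, intended only as trend indicators,
not absolute measures of quality.
"""
import os
import re
import sys

def scan(root):
    projects = 0
    catches = 0
    loc = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if name.endswith(".csproj"):
                projects += 1
            elif name.endswith(".cs"):
                try:
                    text = open(path, encoding="utf-8", errors="ignore").read()
                except OSError:
                    continue
                loc += text.count("\n")
                # naive: counts every 'catch' keyword, including catch-all blocks
                catches += len(re.findall(r"\bcatch\b", text))
    return {"projects": projects, "catch_statements": catches, "lines_of_code": loc}

if __name__ == "__main__":
    print(scan(sys.argv[1] if len(sys.argv) > 1 else "."))
```

Run weekly against each repository and chart the output; the absolute numbers matter less than their direction over time.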

You should organize your work around the six major software quality characteristics: functionality, reliability, usability, efficiency, maintainability, and portability. I've put a diagram online that describes these characteristics. Then, for each characteristic decide the most important metrics you want and are able to track. For example, some metrics, like those of Chidamber and Kemerer are suitable for object-oriented software, others, like cyclomatic complexity are more general-purpose.

Cyclomatic complexity is a decent "quality" metric. I'm sure developers could find a way to "game" it if it were the only metric, though! :)
And then there's the C.R.A.P. metric...
P.S. NDepend has about ten billion metrics, so that might be worth looking at. See also CodeMetrics for Reflector.
D'oh! I just noticed that you already mentioned NDepend.
Number of reported bugs would be interesting to track, too...
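For reference, the C.R.A.P. (Change Risk Anti-Patterns) score mentioned above combines cyclomatic complexity with test coverage. As it is commonly defined, it looks like this (a small sketch, with coverage expressed as a fraction):

```python
def crap_score(complexity: int, coverage: float) -> float:
    """C.R.A.P. score for a single method.

    complexity: cyclomatic complexity of the method
    coverage:   fraction (0.0-1.0) of the method covered by automated tests

    High complexity with low coverage explodes the score; fully covered
    code scores roughly its complexity.
    """
    return complexity ** 2 * (1.0 - coverage) ** 3 + complexity

# An untested method with complexity 10 scores 110,
# while the same method at 100% coverage scores 10.
print(crap_score(10, 0.0))   # 110.0
print(crap_score(10, 1.0))   # 10.0
```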

If you're taking on the task of driving toward better overall code quality, you might take a look at:
How many open issues do you currently have, and how long do they take to resolve?
What process do you have in place to gather requirements?
Does your staff follow best practices?
Do you have SOPs defined that describe your company's programming methodology?
When you have a number of developers involved in a large project, everyone has their own way of programming. Each style of programming solves the problem, but some answers may be less efficient than others.
How do you utilize your staff when attacking a new feature or fixing existing code? Having developers work in teams following programming SOPs pushes everyone to be a better coder.
When your people code more efficiently by following the rules, your development time should get shorter.
You can get all the metrics you want, but I say first you have to see how things are being done:
What are your development practices?
Without knowing how things are currently being done, you can get all the metrics you want, but you'll never see any improvement.

Amount of software cloning/duplicate code; less is obviously better. (The link discusses clones and various techniques to detect/measure them.)
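Without a dedicated clone detector, you can still get a rough duplication number to track. The sketch below hashes sliding windows of normalized lines and reports windows that appear more than once; the window size, normalization, and file extensions are arbitrary choices, and real clone detectors are far more robust:

```python
import hashlib
import os
from collections import defaultdict

WINDOW = 6  # clone candidate = 6+ identical normalized lines in a row

def normalized_lines(path):
    with open(path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith(("//", "#")):
                yield line

def duplicate_windows(root, extensions=(".cs", ".java", ".py")):
    seen = defaultdict(list)  # window hash -> [(file, start index), ...]
    for dirpath, _, files in os.walk(root):
        for name in files:
            if not name.endswith(extensions):
                continue
            path = os.path.join(dirpath, name)
            lines = list(normalized_lines(path))
            for i in range(len(lines) - WINDOW + 1):
                digest = hashlib.sha1("\n".join(lines[i:i + WINDOW]).encode()).hexdigest()
                seen[digest].append((path, i))
    # only windows seen in more than one place count as duplication
    return {h: locs for h, locs in seen.items() if len(locs) > 1}
```

The ratio of duplicated windows to total windows gives a rough "duplication" figure to watch over time.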

Related

How do you assure software's code quality? Is it worth it in an agile environment? [closed]

First of all, sorry for the codeless question, but I'd like to clarify one thing.
I have a senior developer on the team who actively pushes for code quality - merge request reviews, no crappy code and the like. But most of the other guys on the team have a "get shit done" mentality. As a business guy, I do not check the code at all, but if I did not have that guy who cares about quality, in my opinion we would hit some heavy refactoring cycles at some point.
But of course there is a downside to caring about quality too much - it just takes time. Maybe we will have to throw away a lot of beautiful code when we have to pivot as business needs change.
Two questions: a) How do you keep up the quality of your product? What practices do you use? b) Where is the line for caring about code quality enough (not too little and not too much)?
Code quality is important independent of whether you develop agile or not. You are absolutely right that quality improvement requires additional time. Most people fail because they spend their time in larger blocks ('refactoring projects') to more or less clean up code in arbitrary places, with the objective of reducing the number of quality issues as much as possible.
The process I advise to use is to follow the boy-scout rule of always leaving the code that is changed a bit cleaner (better) than it was before. That means whenever you change a function, procedure, method or other code unit, fix the quality problems in it. The advantage is that you already understood the code (because you had to change it anyways) and it is going to be tested (because you need to test your original change). That means the additional effort for quality improvement (adding a comment, improving identifiers, removing redundancy, ...) is very low. In addition, you are improving only code that you are working with and don't waste time improving code that is never touched anyway.
Following the boy-scout rule ensures that the quality does not decrease, but instead steadily increases over time. It is also a reasonable level of 'caring'. I wrote some more about this here. In addition you need a good quality analysis tool like Teamscale that can reliably differentiate between legacy problems, new problems and problems in the code you recently changed.
Get and enforce a good unit testing framework and back it up with automated integration testing.
As for Agile I found the morning 10 minute scrums to be useful, but the end-of-sprint meetings tended to be long.
I have had good experience with SonarQube, a tool we use for static code analysis, so we can keep track of code smells, etc. in our code base. Another point is that fixing issues can be planned into sprints. An IDE integration is available as well.

How to estimate the contribution of an individual to a software project? [closed]

I work on a software project and would like to estimate the percentage of the total contribution that I have put into the development of the software. Is there some tool for doing this? Such a tool can be useful for appraisals or negotiations, for example. After all, we work for money (yes, not only money, but the point remains). I think there is enough hand-waving as it is around the most important things.
The estimation is very subjective (at least to me now), but I do not know of any tool that provides even a subjective estimate. I know of Sloccount, which spells out the total effort using the lines of code, but not on a per-developer basis.
My idea of an ideal tool for this purpose would:
measure the complexity of the code (more complex is more effort, but more effort is not necessarily more contribution)
measure the decomposability/flexibility of the software (more decomposable is better)
how much library code is used -- using library code speeds up the development process, increases the associated risk, and requires the developer to already know or learn about the library.
be intelligent enough to differentiate between "who wrote the code", "who copied the code" and "who indented the code".
It is difficult to differentiate between the complexity in the implementation and the intrinsic complexity of the problem. Perhaps a comparison can be made with an equivalent open source counterpart if there is, or for each submodule separately.
If there is no such tool, is there no merit in having such a tool? Or do you believe in "I do work, I do not measure"? It takes time after all. Perhaps the project manager should do this estimation continuously, say, weekly. Are there any standards? Yes, standardization is difficult because every project has different goals, but perhaps that should mean there should be multiple standards, not no standards at all. This looks similar to how a company is valued in the market.
Update: after seeing a few initial answers: It does not make sense to imagine a tool that just outputs the percentages. Are there tools that can help humans (particularly managers) in making better decisions? Or what is the sufficient statistic for making better decisions? Are these statistics available?
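For what the simplest available statistic looks like in practice, per-author line churn can be pulled straight from version control. This is a crude sketch, and very much the kind of raw count that says nothing about quality, difficulty, or whether the lines should have been written at all:

```python
import subprocess
from collections import Counter

def churn_by_author(repo="."):
    """Lines added + removed per author, from `git log --numstat`.

    A crude activity proxy only: rewriting code, generated files and
    bulk renames all inflate it, so treat it as input for a human, not
    as a contribution percentage.
    """
    out = subprocess.run(
        ["git", "-C", repo, "log", "--numstat", "--format=@%aN"],
        capture_output=True, text=True, check=True,
    ).stdout
    churn = Counter()
    author = None
    for line in out.splitlines():
        if line.startswith("@"):
            author = line[1:]            # commit header emitted by --format
        elif line and author:
            parts = line.split("\t")     # "added<TAB>removed<TAB>path"
            if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
                churn[author] += int(parts[0]) + int(parts[1])
    return churn

for author, lines in churn_by_author().most_common():
    print(f"{lines:>8}  {author}")
```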
I really doubt there is any reliable, trustworthy way of measuring an individual's contribution to the solution. Sometimes rewriting complicated legacy code so that it has fewer lines and a less complicated solution (smaller cyclomatic complexity, etc.) can be seen as quite a significant contribution, while in other cases deleting valuable code covering edge cases, which produces the same statistics (fewer lines of code, smaller CC, etc.), is definitely something bad. It all comes down to people, trust and cooperation; individualism in the team is almost always wrong, and I would rather avoid it and especially not use it as a motivation factor.
This is a research topic on its own. There are several tools that have tried to define metrics like code ownership. There are other approaches which tackle other aspects of collaborative development, for instance the trust we can place in the code.
There have also been several studies that tried to use the information from bug trackers, for instance to identify the developer most likely to introduce bugs. But it's hard to be objective (a brilliant developer who is assigned the most critical part of the system will still be more likely to introduce critical bugs).
It's actually hard to monetize the development tasks. What is the cost of a bug? What is the gain of refactoring? That would be however one way to estimate the contribution of a developer.
The last cool tool I saw of this kind was the Game Plugin for the Hudson continuous integration system. A score is assigned to each developer according to their actions:
-10 if they break the build
-1 for breaking a test
+1 for fixing a test
etc.
That's again a way to somehow assess the contribution of the developer.
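The scoring idea itself is simple enough to sketch. The event names and the two extra weights below are made up for illustration; only the build/test weights come from the list above:

```python
# Hypothetical event weights mirroring the Hudson/Jenkins "game" plugin idea:
# each CI event credits or debits the responsible developer.
SCORE_RULES = {
    "broke_build": -10,
    "broke_test": -1,
    "fixed_test": +1,
    "fixed_build": +5,       # assumed weight, not from the plugin
    "removed_warning": +1,   # assumed weight, not from the plugin
}

def leaderboard(events):
    """events: iterable of (developer, event_kind) tuples taken from CI history."""
    scores = {}
    for developer, kind in events:
        scores[developer] = scores.get(developer, 0) + SCORE_RULES.get(kind, 0)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(leaderboard([("alice", "fixed_test"), ("bob", "broke_build"), ("alice", "broke_test")]))
# [('alice', 0), ('bob', -10)]
```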
All in all, I do feel like what you are asking for exist, but is still very immature.
I don't think you can get a tool to evaluate your share of the project. Measuring lines of source is all very well, but what of the quality of that source? You wouldn't want someone taking the credit for 200 lines of source if the job could have been easily done in 20...
Also, thinking of my employer for a moment, a lot of people contribute to the project in ways other than code. Immediate examples I can think of would be Project Managers and Testers - both of whom are essential, both of whom rightly deserve some credit.
Martin
The only thing that I could imagine would be a voting system. I have absolutely no idea, if that would work in your team or anywhere - but I'm sure, that you will need humans for any realistic estimation of code quality.
In Stroustrup's book on C++ I once read: "Don't try to solve social problems with technical means."
Thinking pragmatically, the attitude and ability of a programmer can be estimated very quickly by doing a code review together and having a talk on relevant topics.
Thinking as an IT enthusiast and a control freak, it shouldn't be very hard to implement teachable machine-learning software which uses version control, the bug database, etc. and generates real-time performance data for each contributor. E.g. R, KNIME or WEKA could be used for this.

How small is too small for a project plan? [closed]

I have friends who have asked me to make websites, and most are very small; usually I don't bother with a technical plan. But one friend in particular clearly had goals larger than my own, and the project is dragging on forever. If I had made a spec before the project, I feel this wouldn't have happened and our relationship would be as solid as before.
So my question is, how can you tell how small is too small? How do you tell when the project you're embarking on is going to end up in a guilt-ridden scope creep nightmare?
If you are going to be charging money (or don't want to be stuck doing the project forever), a project plan is always a good idea, even if it's just a one-pager outlining what the web site will have (how many pages, any special features) and who is responsible for what. If you factor in that you'll spend 20% of your time (or whatever percentage past experience has taught you) on documentation and other non-coding work, you can give a better estimate of the effort needed. If it's a friend, you might want to tell them that you'll do the first X hours for free, but after that your rate is $Y per hour. Also, keep an accurate log of the time you've spent so that you can show them the amount of effort involved. Keeping an accurate log also helps you estimate future projects.
As you may have already figured out, no project is too small to have at least an informal, written plan. Even if it's just a features list.
A project that does not need a plan is a project that does not need to be even started. In my opinion everything needs a plan, what changes is the extent of that plan. A plan could be just a list of deliverables and some deadline attached to each one. A more robust plan should include time charts, cost, phases, communications howto, dependencies, etc. So I think everything needs a plan, the contents of the plan is what changes depending on the project complexity.
Dwight Eisenhower on planning:
"In preparing for battle I have always found that plans are useless, but planning is indispensable."
It seems the same in many software projects: you'll find that your plans need to be continually updated and that your first plan was quite different from what you finally completed. But that's okay, it's much better to put some planning in up front than to try something by the seat of your pants.
Agilists try to accommodate such changes in plans by breaking longer term plans into small "sprints" of 2-4 weeks. They'll have more details on the near term sprints, and fewer details on the longer term goals.
You'll especially want to be more detailed and precise if the project is bigger, if you are doing this for an external customer, or if you're attempting something new for you. It's less important (though not unimportant) for smaller projects and types of work you've done before and are very familiar with.

Software quality metrics [closed]

I was wondering if anyone has experience with metrics used to measure software quality. I know there are code complexity metrics, but I'm wondering if there is a specific way to measure how well the software actually performs during its lifetime. I don't mean runtime performance, but rather a measure of the quality. Any suggested tools that would help gather these are welcome too.
Are there measurements to answer these questions:
How easy is it to change/enhance the software, robustness
If it is a common/general enough piece of software, how reusable is it
How many defects were associated with the code
Has this needed to be redesigned/recoded
How long has this code been around
Do developers like how the code is designed and implemented
Seems like most of this would need to be closely tied with a CM and bug reporting tool.
If measuring code quality in the terms you put it were such a straightforward job and the metrics accurate, there would probably be no need for project managers anymore. Even more, the distinction between good and poor managers would be very small. Because it isn't, that just shows that getting an accurate idea of the quality of your software is no easy job.
Your questions span multiple areas that are quantified differently or are very subjective to quantify, so you should group them into categories that correspond to common targets. Then you can assign an "importance" factor to each category and derive some metrics from that.
For instance you could use static code analysis tools for measuring the syntactic quality of your code and derive some metrics from that.
You could also derive metrics from bugs/lines of code using a bug tracking tool integrated with a version control system.
For measuring robustness, reuse and efficiency of the coding process you could evaluate the use of design patterns per feature developed (of course where it makes sense). There's no tool that will help you achieve this, but if you monitor your software growing bigger and put numbers on these, it can give you a pretty good idea of how your project is evolving and whether it's going in the right direction. Introducing code-review procedures could help you keep track of these more easily and possibly address them early in the development process. A number to put on these could be the percentage of features implemented using the appropriate design patterns.
While metrics can be quite abstract and subjective, if you dedicate time to it and always try to improve them, it can give you useful information.
A few things to note about metrics in the software process though:
Unless you do them well, metrics could prove to be more harm than good.
Metrics are difficult to do well.
You should be cautious in using metrics to rate individual performance or offering bonus schemes. Once you do this everyone will try to cheat the system and your metrics will prove worthless.
If you are using Ruby, there are some tools to help you out with metrics, ranging from LOC/method and methods/class to Saikuro's cyclomatic complexity.
My boss actually gave a presentation on the software metrics we use at a Ruby conference last year; these are the slides.
An interesting tool that brings you a lot of metrics at once is metric_fu. It checks a lot of interesting aspects of your code: stuff that is highly similar, changes a lot, or has a lot of branches. All signs your code could be better :)
I imagine there are lot more tools like this for other languages too.
There is a good thread from the old Joel on Software Discussion groups about this.
I know that some SVN stat programs provide an overview of changed lines per commit. If you have a bug tracking system and the people fixing bugs, adding features, etc. state the commit number when a bug is fixed, you can then calculate how many lines were affected by each bug/new feature request. This could give you a measure of changeability.
The next thing is simply to count the number of bugs found and set them in ratio to the number of lines of code. There are reference values for how many bugs high-quality software should have per line of code.
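That ratio, defect density, is usually quoted per thousand lines of code and is trivial to compute once both numbers come out of your bug tracker and line counter; a minimal sketch with made-up example numbers:

```python
def defect_density(bug_count: int, lines_of_code: int) -> float:
    """Defects per thousand lines of code (defects/KLOC)."""
    return bug_count / (lines_of_code / 1000.0)

# e.g. 45 reported bugs in a 150,000-line codebase:
print(defect_density(45, 150_000))  # 0.3 defects per KLOC
```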
You could do it in an economic way or in a programmer's way.
In the economic way you measure the costs of improving code, fixing bugs, adding new features and so on. If you choose the second way, you may want to measure how many staff work with your program and how easy it is to, say, find and fix an average bug in person-hours. Certainly neither is flawless, because costs depend on the market situation and person-hours depend on the actual people and their skills, so it's better to combine both methods.
This way you get some instruments to measure the quality of your code. Of course you should take into account the size of your project and other factors, but I hope the main idea is clear.
A more customer focused metric would be the average time it takes for the software vendor to fix bugs and implement new features.
It is very easy to calculate, based on the created and closed dates in your bug tracking software.
If your average bug fixing/feature implementation time is extremely high, this could also be an indicator for bad software quality.
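A sketch of that calculation, assuming the tracker can export created/closed timestamps (the ISO date format and the tuple layout here are assumptions about your export):

```python
from datetime import datetime
from statistics import mean

def mean_days_to_resolve(issues):
    """issues: iterable of (created, closed) ISO date strings from the bug tracker."""
    durations = [
        (datetime.fromisoformat(closed) - datetime.fromisoformat(created)).days
        for created, closed in issues
    ]
    return mean(durations) if durations else None

print(mean_days_to_resolve([
    ("2024-01-02", "2024-01-10"),   # 8 days
    ("2024-02-01", "2024-02-03"),   # 2 days
]))  # 5
```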
You may want to check the following page describing various aspects of software quality, including sample plots. Some of the quality characteristics you need to measure can be derived using a tool such as Sonar. It is very important to figure out how you want to model some of the following aspects:
Maintainability: You mentioned how easy it is to change/test the code or reuse it. These relate to the testability and reusability aspects of maintainability, which is considered a key software quality characteristic. Thus, you could measure maintainability as a function of testability (unit test coverage) and reusability (cohesiveness index of the code).
Defects: Defect counts alone may not be a good measure. However, if you can model defect density, it could give you a good picture.

How to convince a client that all next projects/enhancements should be done via TDD (with some agile practices)? [closed]

We are a small team (3 developers) and one of our main clients is about to submit a bunch of new feature requests and a follow-on project to us to get estimates on cost and delivery times. Our last project with them was a 'success' in that they are coming back to us, but I know we could have done a much better job (we used waterfall... testing was an afterthought and as a result unit-testing code coverage is significantly lower than we feel comfortable with, not to mention the never-ending 'we are ALMOST done' problem).
I have just finished reading 'Art of Unit Testing' and 'Working Effectively with Legacy Code' and I have used TDD on a pet project of mine outside of work and now I can never go back to waterfall and testing after the fact.
What I want to know is are there are good 'easy to digest' videos for non-developers that clearly show the benefits of TDD along with Agile practices in a business sense? I'd be super happy if there are any sub 10 minutes videos but I'm also OK with longer videos (and I will reference them to a time index in it). If there are no good videos then a written source is next best thing.
I want nothing more than for them to be on board and really excited with the transition.
For me it is not an option to 'just do it' as there will definitely be a learning curve for the other two developers and without doubt the first number of iterations may be stressful and bumpy and that needs to be communicated to our client.
[I have answered my own question below with a number of videos I found since asking the question... they are not perfect for my use but definitely my plan B if no one else knows of a better one]
Technical debt kills velocity. Thus, I like to include "no increased technical debt" in the Definition of Done. Without this, you can't achieve a sustainable pace. This is illustrated in the Technical Debt - How not to ignore it presentation from Henrik Kniberg.
To me, all these things are obvious and you can even prove it with numbers (by measuring velocity over time). Explain these concepts to your client, and explain to him that TDD is one of the techniques that allow you to control technical debt. Then, let him choose (or choose for him).
How you run your project internally is your business. Don't involve them in this decision. They are not experts in software development processes. Ask them about business requirements and things they know about.
Sounds like you are doing this to improve project quality. Do you think it will cost more to do TDD? Why work to convince them of something and then ask their approval? Did you ask if you could do waterfall on the last project?
Why would your client even notice the transition to TDD? Stressful, bumpy; how so?
Tell the client why you are upgrading to TDD. I'm sure the reasons are as compelling to them as they are to you. To me, TDD first of all means a much greater sense of reliability on what you produce.
Surely your client remembers all the regressions and manual testing from your last project?
I don't know of any specific illustrations for you (the web is full of articles and blogs, but I'm not aware of any videos), but you pretty much answered your own question...
we used waterfall... testing was an after thought and as a result unit-testing code coverage is significantly lower than we feel comfortable with, not to mention the never-ending 'we are ALMOST done' problem
You just need to be honest with your client. Explain to them what the project methodology you used on your last project cost them in terms of flexibility, maintainability, and your ability to confidently provide them with quality code. Explain to them how TDD addresses that, and explain that you anticipate a slower start due to using a new methodology.
Illustrate for them, as concretely as possible, what they will gain, and it should be an easy sell. I would, however, approach it more from the "this is what we're planning on doing" angle, rather than the "can we please do this?" angle. Give them the impression (without being dishonest) that you are already planning on going this way and any change to that plan will be an inconvenience to you and your team, and will likely cost them productivity.
I'm not aware of any videos, but explain to them that it took you N man-hours to redesign a certain feature on the last project because the original design was not correct, which was not caught until you started testing; and that with TDD it would have taken M (<< N) man-hours, since you would not have spent the extra hours working from a bad design/bug as happened last time.
Also, explain that the confidence level of having less buggy software will be raised by Y percent due to thought-out tests.
Then explain that you estimate X hours for the learning curve on the FIRST project, and ask them if the given benefits on ALL future projects will be worth it, when the initial time investment is amortized across those.
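A back-of-the-envelope version of that argument, with purely hypothetical numbers, is just an amortization calculation:

```python
# Hypothetical numbers purely to illustrate the argument above:
learning_curve_hours = 120      # one-time cost on the FIRST project (X)
rework_saved_per_project = 60   # hours of redesign/regression work avoided (N - M)

projects_to_break_even = learning_curve_hours / rework_saved_per_project
print(projects_to_break_even)   # 2.0 -- the investment pays back after two projects
```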
Firstly, unit testing isn't unique to Agile methodologies; I've been around a while and have seen it used on waterfall projects. In fact, I heard of unit testing long before I heard of Agile!
Afraid I can't point you to any videos that will help convince a client to switch development methodologies. Google may help though; if not with videos, then maybe with studies, blogs, etc.
Anyway, one suggestion for improving the chances that the client will accept your reduced productivity during your learning curve is to reduce his costs somehow. E.g. if you're billing by the hour either charge less by the hour for time spent learning, or just don't bill for those learning hours.
I spent the time since asking this question looking for the best videos I could and I've come across a number that are very close to what I need. I will post them here so that others will find them if they are in a similar position to me.
I realise that I asked more about TDD - but these videos capture a good portion of the message I'm trying to drive home... especially 'Why does Agile Software Development Pay' and 'Scrum in under 10 minutes'... it is the process of being responsive to change, producing higher quality code, having fewer defects and faster development cycles.
Agile vs waterfall: A Tale of Two Teams (8:20)
Why does Agile Software Development Pay? (5:03)
Scrum Basics (5:50)
Scrum in under 10 minutes (7:59)
