How do you assure software code quality? Is it worth it in an agile environment? [closed] - software-quality

First of all, sorry for the codeless question, but I'd like to clarify one thing.
I have a senior developer on the team who actively pushes for code quality: merge request reviews, no crappy code, and the like. But most of the other guys on the team have a get-shit-done mentality. As a business guy I don't check the code at all, but in my opinion, if I didn't have that one person who cares about quality, we would hit some heavy refactoring cycles at some point.
Of course there is also a downside to caring about quality too much: it simply takes time. And maybe we will have to throw away a lot of beautiful code when we have to pivot as business needs change.
Two questions: a) How do you keep up the quality of your product? What practices do you use? b) Where is the line of caring about code quality just enough (not too little and not too much)?

Code quality is important independent of whether you develop agile or not. You are absolutely right that quality improvement requires additional time. Most people fail because they spend that time in large blocks ('refactoring projects'), cleaning up code in more or less arbitrary places with the objective of reducing the number of quality issues as much as possible.
The process I advise is to follow the boy-scout rule: always leave the code you change a bit cleaner (better) than it was before. That means whenever you change a function, procedure, method, or other code unit, fix the quality problems in it. The advantage is that you already understand the code (because you had to change it anyway) and it is going to be tested (because you need to test your original change). That means the additional effort for quality improvement (adding a comment, improving identifiers, removing redundancy, ...) is very low. In addition, you are only improving code that you are actually working with, and you don't waste time improving code that is never touched anyway.
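To make the rule concrete, here is a minimal sketch in Python; the function and the "silver tier" change are hypothetical. The required change is the excuse to touch the code, and the cleanup rides along almost for free:

```python
# Before: the function we must touch anyway (the task: add a "silver" tier).
def calc(p, t):
    # x = p * 0.2  # old promo, no longer used
    if t == "gold":
        return p - p * 0.1
    return p

# After: the required change PLUS small boy-scout cleanups that cost almost
# nothing, since the code is already understood and will be tested anyway:
# descriptive names, a named constant, dead code removed.
DISCOUNTS = {"gold": 0.10, "silver": 0.05}  # "silver" is the actual task

def discounted_price(price: float, tier: str) -> float:
    """Return the price after applying the customer's tier discount."""
    return price * (1 - DISCOUNTS.get(tier, 0.0))
```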
Following the boy-scout rule ensures that the quality does not decrease, but instead steadily increases over time. It is also a reasonable level of 'caring'. I wrote some more about this here. In addition you need a good quality analysis tool like Teamscale that can reliably differentiate between legacy problems, new problems and problems in the code you recently changed.

Get and enforce a good unit testing framework and back it up with automated integration testing.
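As a minimal sketch of what a unit testing framework buys you, here is a test written with Python's standard unittest module (the shipping_cost function is a made-up example):

```python
import unittest

def shipping_cost(weight_kg: float) -> float:
    """Base rate plus a per-kilogram charge."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return 5.0 + 1.2 * weight_kg

class ShippingCostTest(unittest.TestCase):
    def test_base_rate_plus_per_kg(self):
        self.assertAlmostEqual(shipping_cost(2.0), 7.4)

    def test_rejects_non_positive_weight(self):
        with self.assertRaises(ValueError):
            shipping_cost(0)

if __name__ == "__main__":
    unittest.main()
```

Enforcement then means running the suite on every commit in CI, with the integration tests layered on top.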
As for Agile, I found the morning 10-minute scrums to be useful, but the end-of-sprint meetings tended to run long.

I've had good experience with SonarQube, a tool we use for static code analysis, so we can keep track of code smells, etc. in our code base. Another point is that fixing the issues it finds can be planned into sprints. IDE integration is available as well.
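For illustration, here is the kind of smell such analyzers flag; this is a hedged Python example (SonarQube's actual rule set is far larger, and pylint would flag the same things):

```python
# Deliberately smelly code, of the sort a static analyzer reports:
def load_value(data, cache={}):   # smell: mutable default argument
    temp = 42                     # smell: unused local variable
    try:
        return data["value"]
    except Exception:             # smell: overly broad exception clause
        pass                      # smell: error silently swallowed

# A cleaner equivalent the analyzer would accept: missing keys are an
# expected case, so handle them explicitly instead of swallowing errors.
def load_value_clean(data: dict):
    return data.get("value")
```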


How small is too small for a project plan? [closed]

I have friends who have asked me to make websites, and most are very small, so usually I don't bother with a technical plan. But one friend in particular clearly had goals larger than my own, and the project is dragging on forever. If I had written a spec before the project, I feel this wouldn't have happened and our relationship would be as solid as before.
So my question is: how can you tell how small is too small? How do you tell when the project you're embarking on is going to end up in a guilt-ridden scope-creep nightmare?
If you are going to be charging money (or don't want to be stuck doing the project forever), a project plan is always a good idea. Even if it's just a one-pager outlining what the web site will have (how many pages, any special features) and who is responsible for what. If you factor in that you'll spend 20% of your time (or whatever percentage past experience has taught you) on documentation and other non-coding work, you can give a better estimate of the effort needed. If it's a friend, you might want to tell them that you'll do the first X hours for free, but after that your rate is $Y per hour. Also, keep an accurate log of the time you've spent so that you can show them the amount of effort involved; an accurate log also helps you estimate future projects.
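The arithmetic is trivial, but writing it down keeps everyone honest; a sketch in Python with made-up numbers:

```python
# All figures are examples, not recommendations.
coding_hours = 40        # raw estimate for building the site itself
overhead_rate = 0.20     # documentation / non-coding work, from past experience
free_hours = 10          # "first X hours free" goodwill for a friend
hourly_rate = 50.0       # $Y per hour after that

total_hours = coding_hours * (1 + overhead_rate)
billable_hours = max(0, total_hours - free_hours)
quote = billable_hours * hourly_rate

print(f"effort: {total_hours:.0f}h, billed: {billable_hours:.0f}h, quote: ${quote:.2f}")
# effort: 48h, billed: 38h, quote: $1900.00
```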
As you may have already figured out, no project is too small to have at least an informal, written plan. Even if it's just a features list.
A project that does not need a plan is a project that does not need to be started at all. In my opinion everything needs a plan; what changes is the extent of that plan. A plan could be just a list of deliverables with a deadline attached to each one. A more robust plan would include time charts, costs, phases, a communications how-to, dependencies, etc. So everything needs a plan; the contents of the plan are what changes depending on the project's complexity.
Dwight Eisenhower on planning:
"In preparing for battle I have always found that plans are useless, but planning is indispensable."
It seems the same in many software projects: you'll find that your plans need to be continually updated and that your first plan was quite different from what you finally completed. But that's okay, it's much better to put some planning in up front than to try something by the seat of your pants.
Agilists try to accommodate such changes in plans by breaking longer term plans into small "sprints" of 2-4 weeks. They'll have more details on the near term sprints, and fewer details on the longer term goals.
You'll especially want to be more detailed and precise if the project is bigger, if you are doing this for an external customer, or if you're attempting something new for you. It's less important (though not unimportant) for smaller projects and types of work you've done before and are very familiar with.

How do you convince a client that all future projects/enhancements should be done via TDD (with some agile practices)? [closed]

We are a small team (3 developers), and one of our main clients is about to submit a bunch of new feature requests and a follow-on project to us to get estimates on cost and delivery times. Our last project with them was a 'success' in that they are coming back to us, but I know we could have done a much better job (we used waterfall; testing was an afterthought, and as a result unit-test code coverage is significantly lower than we feel comfortable with, not to mention the never-ending 'we are ALMOST done' problem).
I have just finished reading 'Art of Unit Testing' and 'Working Effectively with Legacy Code' and I have used TDD on a pet project of mine outside of work and now I can never go back to waterfall and testing after the fact.
What I want to know is: are there any good, easy-to-digest videos for non-developers that clearly show the benefits of TDD along with Agile practices in a business sense? I'd be super happy with anything under 10 minutes, but I'm also OK with longer videos (and I will reference a time index in them). If there are no good videos, then a written source is the next best thing.
I want nothing more than for them to be on board and really excited with the transition.
For me it is not an option to 'just do it', as there will definitely be a learning curve for the other two developers, and the first iterations will no doubt be stressful and bumpy; that needs to be communicated to our client.
[I have answered my own question below with a number of videos I found since asking the question... they are not perfect for my use, but they are definitely my plan B if no one knows of a better one.]
Technical debt kills velocity. Thus, I like to include "No increased technical debt" in the Definition of Done. Without this, you can't achieve sustainable pace. This is illustrated by the picture below (borrowed from the Technical Debt - How not to ignore it presentation from Henrik Kniberg):
[Figure: velocity over time with and without control of technical debt, from Kniberg's presentation]
To me, all these things are obvious, and you can even prove them with numbers (by measuring velocity over time). Explain these concepts to your client, explain to him that TDD is one of the techniques that allow you to control technical debt, and then let him choose (or choose for him).
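A hedged sketch of "proving it with numbers": track completed story points per sprint and show the client the trend. The figures below are invented:

```python
# Story points completed per sprint (illustrative data only).
sprints = {"sprint 1": 30, "sprint 2": 28, "sprint 3": 24, "sprint 4": 19}

points = list(sprints.values())
trend = (points[-1] - points[0]) / (len(points) - 1)
print(f"average change per sprint: {trend:+.1f} points")
# average change per sprint: -3.7 points -> visible evidence of debt
```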
How you run your project internally is your business. Don't involve them in this decision. They are not experts in software development processes. Ask them about business requirements and things they know about.
Sounds like you are doing this to improve project quality. Do you think it will cost more to do TDD? Why work to convince them of something and then ask their approval? Did you ask if you could do waterfall on the last project?
Why would your client even notice the transition to TDD? Stressful, bumpy; how so?
Tell the client why you are upgrading to TDD. I'm sure the reasons are as compelling to them as they are to you. To me, TDD first of all means a much greater sense of reliability on what you produce.
Surely your client remembers all the regressions and manual testing from your last project?
I don't know of any specific illustrations for you (the web is full of articles and blogs, but I'm not aware of any videos), but you pretty much answered your own question...
we used waterfall; testing was an afterthought, and as a result unit-test code coverage is significantly lower than we feel comfortable with, not to mention the never-ending 'we are ALMOST done' problem
You just need to be honest with your client. Explain to them what the project methodology you used on your last project cost them in terms of flexibility, maintainability, and your ability to confidently provide them with quality code. Explain to them how TDD addresses that, and explain that you anticipate a slower start due to using a new methodology.
Illustrate for them, as concretely as possible, what they will gain, and it should be an easy sell. I would, however, approach it more from the "this is what we're planning on doing" angle, rather than the "can we please do this?" angle. Give them the impression (without being dishonest) that you are already planning on going this way and any change to that plan will be an inconvenience to you and your team, and will likely cost them productivity.
I'm not aware of any videos, but explain to them that it took you N man-hours to redesign a certain feature on the last project because of a design flaw that was not caught until you started testing, and that with TDD it would have taken M (<< N) man-hours, since you would not have spent the extra hours building on top of a bad design, as happened last time.
Also, explain that the confidence level of shipping less buggy software will be raised by Y percent thanks to thought-out tests.
Then explain that you estimate X hours of learning curve on the FIRST project, and ask them whether the benefits on ALL future projects will be worth it once that initial time investment is amortized across them.
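The amortization argument is easy to put into numbers; a sketch with hypothetical values for N, M, and X:

```python
redesign_hours_waterfall = 80   # N: rework after a design flaw found late
redesign_hours_tdd = 20         # M: the flaw caught early by tests, M << N
learning_curve_hours = 60       # X: one-time cost on the FIRST project
future_projects = 5

saving_per_project = redesign_hours_waterfall - redesign_hours_tdd
net_saving = saving_per_project * future_projects - learning_curve_hours
print(f"net saving over {future_projects} projects: {net_saving} hours")
# net saving over 5 projects: 240 hours
```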
Firstly, unit testing isn't unique to Agile methodologies; I've been around a while and have seen it used on waterfall projects. In fact, I heard of unit testing long before I heard of Agile!
Afraid I can't point you to any videos that will help convince a client to switch development methodologies. Google may help though; if not with videos, then maybe with studies, blogs, etc.
Anyway, one suggestion for improving the chances that the client will accept your reduced productivity during the learning curve is to reduce his costs somehow. E.g., if you're billing by the hour, either charge less for hours spent learning, or simply don't bill for those learning hours.
I spent the time since asking this question looking for the best videos I could and I've come across a number that are very close to what I need. I will post them here so that others will find them if they are in a similar position to me.
I realise that I asked more about TDD - but these videos capture a good portion of the message I'm trying to drive home... especially 'Why does Agile Software Development Pay' and 'Scrum in under 10 minutes'... it is the process of being responsive to change, producing higher quality code, having fewer defects and faster development cycles.
Agile vs waterfall: A Tale of Two Teams (8:20)
Why does Agile Software Development Pay? (5:03)
Scrum Basics (5:50)
Scrum in under 10 minutes (7:59)

How do you refine your estimation process? [closed]

Estimating how long any given task will take seems to be one of the hardest parts of software development. At my current shop we estimate tasks in hours at the start of an iteration, but once a task is complete we do not use the result to aid future estimations.
How do you use the information you gather from past estimations to refine future ones?
By far one of the most interesting approaches I've ever seen for scheduling realistically is Evidence Based Scheduling, which is part of the FogCreek FogBugz 6.0 release. See Joel's blog post linked above for a synopsis and some examples.
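The core of Evidence Based Scheduling is a Monte Carlo simulation over each developer's historical estimate-to-actual ratios; the following is a simplified sketch of the idea, not FogBugz's actual implementation:

```python
import random

# Historical "velocities" (estimate / actual) for one developer.
historical_velocity = [0.6, 0.9, 1.0, 1.1, 0.5, 0.8]
new_task_estimates = [4, 8, 2, 16]  # hours

def simulate_total(estimates, velocities):
    """One possible project duration: divide each estimate by a randomly
    drawn historical velocity, then sum over all tasks."""
    return sum(e / random.choice(velocities) for e in estimates)

totals = sorted(simulate_total(new_task_estimates, historical_velocity)
                for _ in range(10_000))
print(f"50% confidence: {totals[5000]:.0f}h, 95% confidence: {totals[9500]:.0f}h")
```

The output is a ship-date probability curve instead of a single (wrong) number.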
If an estimate blew out, try to identify whether it was just random (the environment broke, a one-off tricky bug, etc.) or whether there was something you failed to identify.
If an estimate was way too large, identify what you thought was going to take so long and work out why it didn't.
Doing that enough will hopefully help developers in their estimates.
For example, if a dev thinks that writing tests for a controller is going to take ages and then it ends up taking less time than he imagined, the next estimate you make for a controller of similar scope you can keep that in mind.
I estimate with my teammates, iteratively, until we reach consensus. Sure, we make mistakes, but we don't calculate a 'velocity' factor explicitly; rather, we use the gathered experience in our next estimation debates.
I've found that estimating time only gets you so far. Interruptions from other tasks, unforeseen circumstances, or project influences will inevitably change your time frames, and if you were to constantly re-assess, you would waste a lot of time managing when you could be developing.
So for us here, we give an initial time estimate for the solution based on experience (we do not use a model; I've not found one that works well enough in our environment), but we do not judge our KPIs against it, nor do we assure the business that this deadline WILL be hit. Our development approach here is largely reactive, and it seems to fill the business's requirements of us very well.
When estimates are off, there is almost always a blatant cause, which leads to a lesson learned. Recent ones from memory:
The user interface assumed .NET functionality that did not exist (the ability to insert a new row and edit it inline in a GridView); the lesson learned is to verify the functionality of the chosen classes before committing to an estimate. This mistake cost a week.
The FTP process assumed that FtpWebRequest could talk to a bank's secure FTP server; it turned out there's a known bug with this class if the FTP server returns anything other than a backslash for the current directory. The lesson learned is to google for 'bug' and 'problem' together with the class name, not just 'tutorial' and 'example', to make sure there are no gotchas lurking. This mistake cost three days.
These lessons go into a Project Estimation and Development "checklist" document, so they won't be forgotten on the next project.

Need for refactoring? [closed]

Why should companies invest in refactoring components, even though it is not going to add any new feature to the product?
I agree it cleans up the code, fixes bugs, and removes dead code, but what is the payoff?
Maintenance. It will reduce your maintenance costs significantly. There is no comparison between fully factored code and the junk that sits in most companies' repositories. The latter is virtually worthless, while the former is gold.
It depends who you ask. A non-technical manager may say there is no need. A support developer would say that it would help keep the maintenance costs down.
Refactoring needs to be a part of your every-day job. You constantly refactor your code to make it more readable/maintainable/robust/reusable, etc.
Your code is a living document. If it doesn't change over time, it becomes stagnant.
Invest in testing. Invest in refactoring. Invest in writing good code.
Maintenance. Sometimes a project gets too big, or has accumulated too many "fast" patches, to be extended any further. You just have to sit down calmly, clean up, and refactor.
While the other answers are all true, the power of refactoring is that it allows you to change the design of your code with predictable results. The biggest problem of maintenance is that it is virtually impossible to anticipate all requirements for complex applications.
Most of these can be dealt with by adding a new feature, like a new report or command. But others will require part of your application to be redesigned. This is where refactoring and its sibling, unit testing, come into play. By using refactoring techniques you can make the needed design changes safely.
It is not a cure-all, but it is another tool that improves the quality of your code (like structured programming, object orientation, etc.).
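A minimal sketch of that workflow in Python (all names hypothetical): pin the current behavior down with a characterization test, then change the design knowing the test will catch regressions.

```python
import unittest

# After the refactor: the pay calculation has been extracted, so a new
# requirement (say, overtime rules) only has to change in one place.
def pay(hours: float, rate: float) -> float:
    return hours * rate

def report_line(name: str, hours: float, rate: float) -> str:
    return f"{name}: ${pay(hours, rate):.2f}"

class ReportLineTest(unittest.TestCase):
    def test_behaviour_unchanged_by_refactor(self):
        # Characterization test, written BEFORE extracting pay(); it must
        # pass identically before and after the design change.
        self.assertEqual(report_line("Ada", 10, 25.0), "Ada: $250.00")

if __name__ == "__main__":
    unittest.main()
```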
To start off with: refactoring is a tax. If the code works, then you are spending time fixing code that already works; I can see the business types looking quizzical now. A saying I like is "Legacy is another word for code that works."
Now, there are many problems with a growing code base that need to be addressed before you start to spend more time maintaining the code than developing features.
Personally I like the "No Broken Windows" philosophy.
If it ain't broke don't fix it.
But if you need to start fitting the components into new, unpredictable requirements, then it often makes sense to identify the bits you can extract and reuse. You need to be certain that your changes aren't introducing unexpected bugs, so you'll need good test coverage.

How Much Time Should be Allotted for Testing & Bug Fixing [closed]

Every time I have to estimate time for a project (or review someone else's estimate), time is allotted for testing/bug fixing that will be done between the alpha and production releases. I know very well that estimating so far into the future regarding a problem-set of unknown size is not a good recipe for a successful estimate. However for a variety of reasons, a defined number of hours invariably gets assigned at the outset to this segment of work. And the farther off this initial estimate is from the real, final value, the more grief those involved with the debugging will have to take later on when they go "over" the estimate.
So my question is: what is the best strategy you have seen with regards to making estimates like this? A flat percentage of the overall dev estimate? Set number of hours (with the expectation that it will go up)? Something else?
Something else to consider: how would you answer this differently if the client is responsible for testing (as opposed to internal QA) and you have to assign an amount of time for responding to the bugs that they may or may not find (so you need to estimate time for bug fixing, but not for testing)?
It really depends on a lot of factors. To mention but a few: the development methodology you are using, the amount of testing resource you have, and the number of developers available at this stage in the project (many project managers will move people onto something new at the end).
As Rob Rolnick says, 1:1 is a good rule of thumb; however, in cases where a specification is bad, the client may push for "bugs" which are actually badly specified features. I was recently involved in a project with many releases where more time was spent on bug fixing than on actual development, due to a terrible specification.
Ensure a good specification/design, and your testing/bug-fixing time will be reduced, because it will be easier for testers to see what and how to test, and clients will have less leeway to push for extra features.
Maybe I just write buggy code, but I like having a 1:1 ratio between devs and testers. I don't wait until alpha to test, but rather do it throughout the whole project. The logic? Depending on your release schedule, there can be a good deal of time between when development starts and your alpha, beta, and ship dates. Furthermore, the earlier you catch bugs, the easier (and cheaper) they are to fix.
A good tester, who finds bugs soon after each check-in, is invaluable. (Or, better yet, before a check-in from a PR or DPK.) Simply put, I am still extremely familiar with my code at that point, so most bug fixes become super simple. With this approach, I tend to leave roughly 15% of my dev time for bug fixing, at least when I do estimates. So in a 16-week run I'd leave around 2-3 weeks.
Only a good amount of accumulated statistics from previous projects can help you give precise estimates. If you have a well-defined set of requirements, you can make a rough calculation of how many use cases you have. As I said, you need some statistics for your team: you need to know the average bugs-per-LOC number to estimate the total bug count. If you don't have such numbers for your team, you can use industry averages. Once you have estimated the LOC (number of use cases * NLOC) and the average bugs per line, you can give a more or less accurate estimate of the time required to release the project.
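A sketch of that calculation, where every number is a placeholder for your own team's statistics:

```python
use_cases = 40
nloc_per_use_case = 250    # average LOC per use case, from your history
bugs_per_kloc = 15         # team (or industry-average) defect density
hours_per_bug_fix = 2.5    # average fix time, from your history

total_loc = use_cases * nloc_per_use_case
expected_bugs = total_loc / 1000 * bugs_per_kloc
bug_fix_hours = expected_bugs * hours_per_bug_fix
print(f"{total_loc} LOC -> ~{expected_bugs:.0f} bugs -> ~{bug_fix_hours:.0f}h of fixing")
# 10000 LOC -> ~150 bugs -> ~375h of fixing
```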
From my practical experience, time spent on bug fixing is equal to or greater than (in 99% of cases :) ) the time spent on the original implementation.
From the testing Bible:
Testing Computer Software
p. 31: "Testing [...] accounts for 45% of initial development of a product." A good rule of thumb is thus to allocate about half of your total effort to testing during initial development.
Use a language with Design by Contract or "code contracts" (preconditions, check assertions, postconditions, class invariants, etc.) to get "testing" as close to your classes and class features (methods and properties) as possible. Then use TDD to test your code against its contracts.
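Python has no native Design by Contract the way Eiffel does, so the sketch below emulates require/ensure with a small decorator; all names are hypothetical:

```python
def contract(pre, post):
    """Wrap a function with a precondition and a postcondition check."""
    def wrap(fn):
        def checked(*args):
            assert pre(*args), f"precondition of {fn.__name__} violated"
            result = fn(*args)
            assert post(result, *args), f"postcondition of {fn.__name__} violated"
            return result
        return checked
    return wrap

@contract(pre=lambda xs: len(xs) > 0,
          post=lambda r, xs: r in xs and all(r <= x for x in xs))
def minimum(xs):
    """Smallest element; the contract documents and enforces the guarantee."""
    smallest = xs[0]
    for x in xs[1:]:
        if x < smallest:
            smallest = x
    return smallest

print(minimum([3, 1, 2]))  # 1; minimum([]) would raise a precondition error
```

Any TDD test that drives minimum() then exercises its pre- and postconditions for free.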
Use as much self-built code generation as you possibly can. Generated code is proven, predictable, easier to debug, and easier/faster to fix than all-hand-written code. Why write what you can generate? However, do not use OPGs (other people's generators)! Code YOU generate is code you control and know.
You can expect the ratio to invert over the course of your project: you will write lots of hand-code and contracts at the start (1:1). As you spot patterns, teach a code generator YOU WRITE to produce the code for you, and reuse it. The more you generate, the less you design, write, debug, and test. By the end of the project, you will find that the equation has inverted: you're writing less of your core code, and your focus shifts to your "leaf code" (the last mile) or specialized (vs. generalized and generated) code.
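A toy sketch of a self-built generator (the template and the classes are hypothetical); the point is that one spec line replaces dozens of hand-written, hand-tested lines:

```python
TEMPLATE = '''class {name}:
    def __init__(self, {args}):
{assigns}
'''

def generate_class(name, fields):
    """Emit a trivial data class from a name and a list of field names."""
    args = ", ".join(fields)
    assigns = "\n".join(f"        self.{f} = {f}" for f in fields)
    return TEMPLATE.format(name=name, args=args, assigns=assigns)

print(generate_class("Customer", ["name", "email"]))
print(generate_class("Invoice", ["number", "total", "due_date"]))
```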
Finally, get a code analyzer. A good, automated code-analysis rule system and engine will save you oodles of time finding "stupid bugs", because there are well-known gotchas in how people write code in particular languages. In Eiffel, we now have Eiffel Inspector, where we not only use the 90+ rules that come with it but are also learning to write our own rules for the gotchas we discover. Such analyzers not only save you bugs, they enhance your design; even GREEN programmers "get it" rather quickly, stop making rookie mistakes earlier, and learn faster!
The rule of thumb for rewriting existing systems is this: "If it took 10 years to write, it will take 10 years to re-write." In our case, using Eiffel, Design by Contract, code analysis, and code generation, we have re-written a 14-year system in 4 years and will fully deliver in 4 1/2. The new system is about 4x to 5x more complex than the old system, so this is saying a lot!
