How do you fight design complexity? [closed] - project-management

I often find myself fighting overengineering -- the person in charge of designing the software comes up with an architecture that's way, way overcomplicated.
It's all fine and dandy to have all the esoteric features that users will never know about, and to get that sense of achievement from doing whatever the magazine articles say is the latest, cool thing. But we are going to spend half of our engineering time on this monument to our cleverness, and not, you know, on the actual product that our users need and that upper management expects to be completed within a reasonable, or at least bounded, time frame.
And you'll probably just have to revert to the simpler solution anyway when you start running out of time -- that is, if you get that chance.
We've all heard the refrain: Keep It Simple, Stupid™.
How do you fight overcomplexity in your team?
One example I've had to work with repeatedly lately is when the decision has been made to go to a fully denormalized database design rather than an RDBMS, "because it's faster!" Fully denormalized databases are really hard to get right, are only appropriate for really specialized data problems like Flickr or eBay, and can be extremely expensive in terms of developer time relative to the rest of your development.
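To make that trade-off concrete, here is a toy sketch (mine, not the original poster's example) of why copying data everywhere is hard to get right: a single rename becomes a scatter of updates, and missing even one copy silently corrupts the data.

    # Normalized: user data lives in one place; orders reference it by id.
    users = {1: {"name": "Alice"}}
    orders_normalized = [
        {"order_id": 100, "user_id": 1, "total": 25.0},
        {"order_id": 101, "user_id": 1, "total": 40.0},
    ]

    # Denormalized: the user's name is copied into every order for read speed.
    orders_denormalized = [
        {"order_id": 100, "user_name": "Alice", "total": 25.0},
        {"order_id": 101, "user_name": "Alice", "total": 40.0},
    ]

    # A rename is a single write in the normalized model...
    users[1]["name"] = "Alicia"

    # ...but in the denormalized model every duplicated copy must be found and
    # rewritten, and missing one row leaves the data inconsistent.
    for order in orders_denormalized:
        if order["user_name"] == "Alice":
            order["user_name"] = "Alicia"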

Tooth and nail, and sometimes you lose. The problem is that it's always easy to be tempted to build something cool.
Why build something simple and efficient when it can be complicated and wonderful?
Try to remind people of the XP rule of building the simplest thing that can possibly work.

Make sure to bounce any ideas you have off of someone else. Oftentimes, we get so wrapped up in doing things a certain way that it takes another set of eyes to set us right. There have been many times that I've figured out difficult problems by having somebody else there to say "do we really need that?" This helps keep my code simpler.
On the point of dealing with people you disagree with, Ward Cunningham has a good point:
It was a turning point in my programming career when I realized that I didn't have to win every argument. I'd be talking about code with someone, and I'd say, "I think the best way to do it is A." And they'd say, "I think the best way to do it is B." I'd say, "Well no, it's really A." And they'd say, "Well, we want to do B." It was a turning point for me when I could say, "Fine. Do B. It's not going to hurt us that much if I'm wrong. It's not going to hurt us that much if I'm right and you do B, because we can correct mistakes. So let's find out if it's a mistake. ... Usually it turns out to be C. It's a learning experience for both of us. If we decide without the experience, neither of us really learns. Ward won, and somebody else didn't. Or vice versa. It's too much of a battle. Why not say, 'Well, let's just code it up and see what happens. If it doesn't work, we'll change it.'"
My advice? If you want to do something better, come up with a simple prototype that demonstrates that it's better. Opinions are great, but code talks.

I have seen this formula somewhere:
skill = complexity of problem / complexity of solution
In other words, it requires skill to create a simple solution to a complex problem. If somebody purposefully designs and takes pride in complex overengineered solutions, then he is unconsciously incompetent.
Personally, what helps me keep my designs simple is the TDD cycle. First write a test that specifies what you're trying to reach, and then produce "the simplest thing that could possibly work". And every now and then, reflect on what you have produced and think about how to make it simpler.
Never build extra flexibility and abstraction layers into the system until they are required by something that you have now. Changing the code is easy when you have a good unit test suite, so you can add those abstraction layers later, when the need arises -- if it ever arises. Otherwise, "you ain't gonna need it".
One symptom of an overly complex design is that writing tests is complicated. If the tests require long setup code, maybe you have too many dependencies or too much complexity in some other way. If you run into concurrency bugs, then maybe you should think about how to design the system so that concurrency is restricted to the absolute minimum number of classes. Maybe use a message-passing architecture, such as the Actor model, and make practically every component single-threaded, even though the system as a whole is multi-threaded.
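As a rough illustration of that last point, here is a minimal sketch (using only the Python standard library; the counter example is made up) of an actor-style component: one worker thread owns the state, and everything else only sends it messages, so no other thread ever touches the shared data directly.

    import queue
    import threading

    class CounterActor:
        """Owns its state; other threads interact only by sending messages."""

        def __init__(self):
            self._inbox = queue.Queue()
            self._count = 0  # touched only by the worker thread below
            self._worker = threading.Thread(target=self._run, daemon=True)
            self._worker.start()

        def send(self, message):
            self._inbox.put(message)

        def _run(self):
            while True:
                message = self._inbox.get()
                if message == "stop":
                    break
                if message == "increment":
                    self._count += 1

    actor = CounterActor()
    for _ in range(10):
        actor.send("increment")
    actor.send("stop")
    actor._worker.join()
    print(actor._count)  # 10 -- no locks needed, since only one thread mutates it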

At least for me, the bigger issue is that it's often hard to tell which features are in there for their buzzword-friendly, magaziney, enterprisey goodness and which are in there because they add a level of flexibility that will be useful in the future.
It's been shown that people are generally terrible at anticipating future complexity and the side-effects of current decisions. Unfortunately this doesn't always mean simplest is best -- in my case, there have been loads of things I thought were too complicated at first and didn't see the value of until much later (er... Spring). There have also been things I thought made sense that turned out to be wildly overcomplicated (EJB 1). So I know that my intuition about these things is faulty.
Best bet: any kind of indirection layer should be backed by an argument for the value of the flexibility it adds versus its added dev complexity.
However, people who are dogmatically maintaining a particular DB setup on abstract grounds are probably in the "building it because I read that it's the right thing" camp. It might be unrealistic, but some people might be convinced if you build a test version and benchmark it, especially if the results show a lot more effort buying an insignificant performance increase.
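A minimal sketch of that "benchmark it" suggestion, with placeholder snippets standing in for the two database layouts; the point is simply to measure both candidates on the operations that actually matter before arguing in the abstract.

    import timeit

    # Placeholder snippets standing in for "query against layout A / layout B";
    # in a real test you would run the actual critical queries on realistic data.
    candidates = {
        "layout A (placeholder)": "sorted(range(10_000))[::2]",
        "layout B (placeholder)": "list(range(0, 10_000, 2))",
    }

    for label, snippet in candidates.items():
        seconds = timeit.timeit(snippet, number=1_000)
        print(f"{label}: {seconds:.3f}s for 1,000 runs")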

It's all fine and dandy to have all the esoteric features that users will never know about and...
This would be feature creep, not unnecessarily complicated design. It's different from your example on databases.
One example I've had to work with repeatedly lately is when the decision has been made to go to a fully denormalized database design rather than an RDBMS, "because it's faster!"
In this case several things may be going on. One of them is that you might be wrong, and these people could really know what they are saying because they have worked with very similar cases. Another is that they might be wrong, i.e. their design doesn't offer the speed advantages they claim. In that case there could be two different situations: (1) they are giving speed too much weight in their design, or (2) speed really is critical. If speed is indeed so relevant, the team shouldn't rely only on assumptions - they should try different prototypes and evaluate their speed on the critical paths. You don't build an F1 car one way just "because it's faster"; instead you keep trying several alternative design solutions and pick the fastest one that still doesn't increase maintenance costs too much.
Sometimes you can argue it and reach an agreement, sometimes you can't. It's life.
A final word, though. You don't fight complexity. You treat it. You identify the really important things and act accordingly.

I assume you mean "fully denormalized database design rather than a normalized (e.g., third or fourth normal form) model", because a relational model is managed by an RDBMS regardless of how normalized it is.
Can't judge without knowing more about your requirements, your abilities, and those of your teammates.
I fear that your KISS admonition might not work in this case, because one big, denormalized table might be defended as the simplest thing possible.
How does anybody solve these kinds of problems? Communication, persuasion, better data, prototypes of alternative technologies and techniques. This is what makes software development so hard. If there were only one way to do these things, and everyone agreed on it, we truly could script it, or get anyone to develop systems, and be done with it.
Get some data. Your words might not be enough. If a quick prototype can demonstrate your point, create it.
"Strong opinions, lightly held" should be your motto.
Sometimes a technical point isn't worth alienating your entire team. Sometimes it is. Your call.

You fight someone else's overdesign/feature creep in several ways:
Request feature priority, based on actual user requirements. Mock up features for alpha and beta testers, and ask if they would trade N months of delay for it.
Aggressively refactor to avoid special-casing. Break code into layers or modular components whenever appropriate. Find a balance between "works just fine now" and "easy to extend later".
Notify your management when you disagree with design decisions, prepare to be overruled, and accept the decision. Don't go over anyone's head or sabotage code.

The best way I have found is to relentlessly ask - again and again - 'What is the business problem we are trying to solve' and 'How does this decision help to solve that problem'.
I find that too often folks jump to solutions instead of being crystal clear on what the problem is.
So in your example of how to organize a database, my question would be 'What do we think the transaction requirements for this project are today, next month, next year, five years from now?' It could be that it makes sense to spend a lot of time getting the data model right, or it could be a waste of time. You don't know what the parameters are until you get the problem definition clear.

You may suffer from "too many architects in the team" syndrome. One or two people at most should design/architect a system which will be coded by a team of 5 to 10 people. Input is welcome from everyone but architectural decision makers should be few and experienced.
(the numbers are semi-random and could be different depending on other factors as well)

I try to be open when discussing matters. But when I am debating with someone else between something that seems simple and something complicated, I get as stubborn as can be. It helps quite a lot, so long as you are very consistent from one decision to the next.

Your example isn't really a complicated design; it's a design choice that you don't agree with. Since you're working on the code, you could easily be right, because many of these decisions are made by people who read an article and think it sounds like a good goal, or the decision could have been made by someone who ran into problems before and was trying to prevent them from happening again.
Personally I've done a lot of stuff the easy way and a lot of it the hard way, and I'm never happy when I choose the easy way over the hard way. Now I've learned rules of thumb like "never pass around a naked collection; always wrap it in a business class".
If I were to explain the reasoning behind that to someone who hadn't been through the same experiences, they wouldn't understand it until they had tried comparing the "easy way" to the "hard way" a few times.
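For what that "no naked collections" rule of thumb looks like in practice, here is a small sketch (the OrderBook example is invented): callers get domain operations and validation instead of poking at a raw list.

    class OrderBook:
        """Wraps the raw list so callers get domain operations, not a naked collection."""

        def __init__(self, orders=None):
            self._orders = list(orders or [])

        def add(self, order):
            if order["total"] < 0:
                raise ValueError("order total cannot be negative")
            self._orders.append(order)

        def total_revenue(self):
            return sum(order["total"] for order in self._orders)

    book = OrderBook()
    book.add({"id": 1, "total": 25.0})
    book.add({"id": 2, "total": 40.0})
    print(book.total_revenue())  # 65.0 -- callers never touch the underlying list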

The solution should be no more complex than the problem.
The question intertwines with the idea of essential complexity. A sort, by its essence, must touch each element. How much more complex than that must a solution get, given the technical constraints on it?

Do the people involved have enough time and incentive to find a simple solution? Without care, complexity will increase. If you spend most of your time trying to do the quickest possible bug fix or feature addition then saying "keep it simple" will not be enough.
Ensure that there are some people on the team with war wounds from maintaining large programs, and people with experience of refactoring, and then give them time to sort the software out. If you can arrange that certain features and opportunities are out of scope, that will help people remove unneeded code. If you want a metric, aim to reduce lines of code; but try not to obsess over it. Improve the test coverage; consider eliminating bits of code which are hard to test.

Don't try to do everything at a stretch. Break every problem/task into manageable chunks. Then prioritize, keeping KISS and YAGNI in mind. This will help you focus on building what you need. If you've done it right, you'll have a good core you can add to later, given time, money, resources and inspiration.

Related

IT evaluating quality of coding - how do we know what's good? [closed]

Coming from an IT background, I've been involved with software projects, but I'm not a programmer. One of my biggest challenges is that, having a lot of experience in IT, people often turn to me to manage projects that include software development. The projects are usually outsourced and there isn't a budget for a full-time architect or PM, which leaves me in a position to evaluate the work being performed.
While I've managed to get through this in the past, I'm (with good reason) uneasy about accepting these responsibilities.
My question is, from a perspective of being technically experienced but not in programming, how can I evaluate whether coding is written well besides just determining if it works or not? Are there methodologies, tips, tricks of the trade, flags, signs, anything that would say - hey this is junk or hey this is pretty damn good?
Great question. Should get some good responses.
Code cleanliness (indented well, file organization, folder structure)
Well commented (not just inline comments, but variables that say what they are, functions that say what they do, etc.)
Small understandable functions/methods (no crazy 300 line methods that do all sorts of things with nested if logic all over the place)
Follows SOLID principles
Is the amount of unit test code similar in size and quality to the code base of the project
Is the interface code separate from the business logic code which in turn should be separate from the infrastructure access code (email, database, web services, file system, etc.)
What do code analysis tools think of the code (NDepend, NDoc, NCover, etc.)
There is a lot more to this... but this gets you started. (There's a small sketch of the layering point below.)
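A tiny sketch of that interface / business logic / infrastructure separation (the invoice and mailer names are made up): the business rule stays pure and testable, and the infrastructure is handed in from outside.

    class ConsoleMailer:
        """Infrastructure layer: swap in real email/database access here."""

        def send(self, address, body):
            print(f"to {address}: {body}")

    def invoice_total(line_items):
        """Business logic: pure and easy to unit test in isolation."""
        return sum(price * quantity for price, quantity in line_items)

    def send_invoice(mailer, address, line_items):
        """Thin coordination layer: wires business logic to infrastructure."""
        mailer.send(address, f"You owe {invoice_total(line_items):.2f}")

    send_invoice(ConsoleMailer(), "customer@example.com", [(10.0, 2), (5.0, 1)])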
Code has 2 primary audiences:
The people who use it
The people who develop it
So you need 2 simple tests:
Run the code. Can you get it to do the job it is supposed to do?
Read the code. Can you understand the general intentions of the developer?
If you can answer yes to both of these, it is great code.
When reading the code, don't worry that you are not a programmer. If code is well written / documented, even a non-programmer should be able to guess much of what it is intended to achieve.
BTW: Great question! I wish more non-programmers cared about code quality.
First, set ground rules (that all programmers sign up to) that say what's 'good' and what isn't. Automate tests for those that you can measure (e.g. functions less than a number of lines, McCabe complexity, idioms that your coders find confusing). Then accept that 'good coding' is something you know when you see rather than something you can actually pin down with a set of rules, and allow people to deviate from the standard provided they get agreement from someone with more experience. Similarly, such standards have to be living documents, adapted in the face of feedback.
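One of those measurable rules can be automated with very little code. This is a sketch using Python's standard-library ast module; the 40-line limit and the file name are placeholders for whatever your team agrees on.

    import ast

    MAX_LINES = 40  # arbitrary threshold -- use whatever number the team agrees on

    def long_functions(path):
        """Yield (name, length) for every function longer than MAX_LINES."""
        with open(path) as handle:
            tree = ast.parse(handle.read(), filename=path)
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                length = node.end_lineno - node.lineno + 1
                if length > MAX_LINES:
                    yield node.name, length

    # "some_module.py" is a placeholder; point this at your own source files.
    for name, length in long_functions("some_module.py"):
        print(f"{name} is {length} lines (limit is {MAX_LINES})")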
Code reviews also work well, since not all such 'good style' rules can be automatically determined. Experienced programmers can say what they don't like about inexperienced programmers' code - and you have to get the original authors to change it so that they learn from their mistakes - and inexperienced programmers can say what they find hard to understand about other people's code - and, by being forced to read other people's code, they'll also learn new tricks. Again, this will give you feedback on your standard.
On some of your specific points, complexity and function size work well, as does code coverage during repeatable (unit) testing, but that last point comes with a caveat: unless you're working on something where high quality standards are a necessity (embedded code, as an example, or safety-critical code) 100% code coverage means you're testing the 10% of code paths that are worthwhile to test and the 90% that almost never get coded wrong in the first place. Worthwhile tests are the ones that find bugs and improve maintainability.
I think it's great you're trying to evaluate something that typically isn't evaluated. There have been some good answers above already. You've already shown yourself to be more mature in dealing with software by accepting that since you don't practice development personally, you can't assume that writing software is easy.
Do you know a developer whose work you trust? Perhaps have that person be a part of the evaluation process.
how can I evaluate whether coding is written well
There are various ways/metrics to define 'well' or 'good', for example:
Delivered on time
Delivered quickly
No bugs after delivery
Easy to install
Well documented
Runs quickly
Uses cheap hardware
Uses cheap software
Didn't cost much to write
Easy to administer
Easy to use
Easy to alter (i.e. add new features)
Easy to port to new hardware
...etc...
Of these, programmers tend to value "easy to alter", because their job is to alter existing software.
It's a difficult one, and this could be where your non-functional requirements will help you:
specify your performance requirements: transactions per second, response time, expected DB records over time (there's a small sketch of a response-time check below)
require the delivery to include output from a performance analysis tool
specify the machine the application will be running on; you should not have to upgrade your hardware to run the app
For eyeballing the code and working out whether or not it's well written, it's tougher; the answers from #Andrew & #Chris cover it pretty much... you want code that looks good, is easy to maintain, and performs well.
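Here is a small sketch of turning a response-time requirement into a repeatable check; the 200 ms budget and handle_request() are placeholders, not real figures from the question.

    import time
    import unittest

    def handle_request():
        time.sleep(0.05)  # stand-in for the code path being measured
        return "ok"

    class ResponseTimeTest(unittest.TestCase):
        def test_request_stays_within_budget(self):
            start = time.perf_counter()
            handle_request()
            elapsed = time.perf_counter() - start
            self.assertLess(elapsed, 0.2, "response exceeded the 200 ms budget")

    if __name__ == "__main__":
        unittest.main()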
Summary
Use the Joel Test.
Why?
Thanks for a tough question. I was about to write a long answer on the merits of direct and indirect code evaluation, understanding your organisational context and perspective, figuring out a process and setting criteria for code to be good enough, and then the difference between the code being perfect and just good enough, which still might mean "very impressive". I was about to refer to Steve McConnell's Code Complete and even suggest delegating the code audit to someone impartial you can trust, who is business- and programming-savvy enough to grasp the context and perspective, apply the criteria sensibly, and report the results neatly back to you. I was going to recommend looking at the parts of the UI that are normally out of the end user's reach, the same way one judges the quality of cleaning by checking for dirt in hard-to-reach places.
Well, and then it struck me: what is the end goal? In all but a few edge cowboy-coding scenarios, as a result of the audit you're likely to discover that the code is better than junk, but certainly not damn good; maybe just slightly below the good-enough mark. And then what is next? There are probably going to be a few choices:
Changing the supplier.
Insisting on the code being re-factored.
Leaving things as they are and from that point on demanding better code.
Unfortunately, none of the options is ideal or even very good. Having already made an investment, changing supplier is costly and quite risky: part of the software's conceptual integrity will be lost, your company will have to, albeit indirectly, swallow the inevitable cost of the new supplier taking over the development and going through the learning curve (exactly the opposite of what most suppliers will tell you in order to get their foot in the door). And there is going to be a big risk of missing the original deadlines.
The option of insisting on refactoring the code isn't perfect either. There is going to be a question of cost, and it's very likely that for various contractual and historical reasons you won't find yourself in a good negotiating position. In any case, re-writing software is likely to affect deadlines, and the organisation that couldn't do the job right the first time is very unlikely to produce much better code on the second attempt. The latter point applies to the third option as well: I would be dubious of any company producing better code without some, often significant, organisational change. Leaving things as they are isn't good either: a piece of rotten code, unless totally isolated, is eventually going to poison the rest of the source.
This brings me to the actual conclusion, or in fact two:
Concentrate on picking the right software company in the first place, since going forward your options are going to be somewhat constrained.
Use your IT and management knowledge to pick a company that is focused on attracting and retaining good developers, and that creates a working environment and culture fit for producing good quality code, instead of relying on post-factum analysis.
It’s needless to expand on the importance of choosing the right company in the first place as opposed to summative evaluation of delivered project; hopefully the point is already made.
Well, how do we know the software company is right? Here I fully subscribe to the philosophy evangelised by Joel Spolsky: the quality of software directly depends on the quality of the people involved, which, as several studies have indicated, can vary by an order of magnitude. And through the workings of free markets, developers end up clustered in companies based on how much a particular company cares about attracting and retaining them.
As a general rule of life, the best programmers end up working with the best, good with good, average with average, and cowboy coders with other cowboy coders. However, there is a caveat. Most companies have at least one or two very good developers they care about and try their hardest to retain. These devs are always put on the front line: to fight fires, to lure a customer, to prove the organisation's potential and competence. Working amongst colleagues who are no more than average, overstretched between multiple projects, and being treated as royalty, these star programmers sadly very often lose touch with reality and become prima donnas who won't "dirty" their hands with any actual programming work.
Unfortunately, programming talent doesn’t scale and it’s unlikely that the prima donna is going to work on your project past the initial phase designed to lure and lock in you as a customer. At the end the code is going to be produced by a less talented colleague and as a result you’ll get what you’ll get.
The solution is to look for a company where developer talent is more consistent and everyone is at least good enough to produce the right quality of code. And when it comes to choosing such an organisation, that's where the Joel Test comes in mighty handy. I believe it's especially suitable for application by someone who has no programming experience but a good understanding of IT and management.
The more points a company scores on the Joel Test, the more likely it is to attract and retain good developers and, most importantly, to provide them with the conditions to produce quality code. And since most great devs are actually in love with programming, all they need is to be teamed up, given a good and supportive work environment and a credible goal (or, even better, an incredible one), and they'll start chucking out high quality code. It's that simple.
Well, the only thing is that a company that scores the full twelve points on the Joel Test is likely to charge more than a sweatshop that scores a mere 3 or 5 (a self-estimated industry average). However, the benefits of having the synergy of efficient operations and bespoke trouble-free software that leverages strategic organisational goals will undoubtedly produce an exceptional return on investment, overcoming any hurdle rates and far outweighing the project costs. I mean, at the end of the day the company's work will likely be worth the money, every penny of it.
I also hope that someone will find this longish answer worthwhile.

Premature refactoring? [closed]

We have all heard of premature optimization, but what do you think about premature refactoring? Is there any such thing in your opinion? Here is what I am getting at.
First off, reading Martin Fowler's seminal work "Refactoring" quite literally changed my life in regards to programming.
One thing that I have noticed, however, is that if I start refactoring a class or framework too quickly, I sometimes find myself coded into a corner so-to-speak. Now, I suspect that the issue is not really refactoring per se, but maybe premature/poor design decisions/assumptions.
What are your thoughts, insights and/or opinions on this issue? Do you have any advice or common anti-patterns related to this issue?
EDIT:
From reading your answers and reflecting on this issue more, I think I have come to the realization that my problem in this case is really an issue of "premature design" and not necessarily "premature refactoring". I have been guilty of assuming a design and refactoring in that direction too early in the coding process. A little patience on my part to maintain a level of design agnosticism and focus on refactoring towards clean code would keep me from heading down these design rabbit trails.
I actually think the opposite.
The earlier you start thinking about whether or not your design needs refactoring, the better. Refactor constantly, so it's never a large issue.
I've also found that the more I refactor early on, the better I've gotten about writing code more cleanly up front. I tend to create fewer large methods, and have fewer problems.
However, if you find yourself "refactoring" yourself into a corner, I'd expect that is more a matter of lack of initial design or lack of planning for the scope of use of a class. Try writing out how you want to use the class or framework before you start writing the code - it may help you avoid that issue. This is also I think one advantage to test driven design - it helps you force yourself to look at using your object before it's written.
Remember, refactoring technically should NEVER lock you into a corner - it's about reworking the internals without changing how a class is used. If you're trapping yourself by refactoring, it means your initial design was flawed.
Chances are you'll find that, over time, this issue gets better and better. Your class and framework design will probably end up more flexible.
We have all heard of Premature Optimization, but what do you think about Premature Refactoring? Is there any such thing in your opinion?
Yes, there is. Refactoring is a way of paying down technical debt that has accrued over the life of your development process. However, the mere accrual of technical debt is not necessarily a bad thing.
To see why, imagine that you are writing tax-return analysis software for the IRS. Suddenly, new regulations are introduced at the last minute which break several of your original assumptions. Although you designed well, your domain model has fundamentally shifted from under your feet in at least one important place. It's April 14th, and the project must go live tomorrow, come hell or high water. What do you do?
If you implement a nuts-and-bolts solution at the cost of some moderate technical debt, your system will become more rigid and less able to withstand another round of these changes. But the system can go live and proceed onward, and there will be no risk of delivering late; you're confident you can make the required changes.
On the other hand, if you take the time to refactor the solution so that it now supports the new design in more sophisticated and flexible way, you'll have no trouble adapting to future changes. But you run the risk of your company's flagship product running up against the clock; you're not sure if the redesign will take longer than today.
In this case, the first option is the better choice. Assuming you have little previous technical debt, it's worth it to take your lumps now and pay it down later. This is, of course, a business decision, and not a design one.
I think it is possible to refactor too early.
At the nuts-and-bolts end of design is the code itself. This final stage of the design comes into existence as you code; it will at times be flawed, and you'll see that as the code evolves. If you refactor too early, it makes it harder to change the flawed design.
For example, it's much easier to delete a single long function when you realise it's rubbish or going in the wrong direction than it is to delete a nice well-formed function and the functions it uses and the functions they use, etc., whilst ensuring you're not breaking something else that was part of the refactor.
It could be said that perhaps you should have spent more time designing, but a key element in an agile process is that coding is part of the design process and in most cases, having put some reasonable effort into design, it's better to just get on with it.
Edit: In response to comments:
Design isn't done until you've written code. We can't solve all problems in pre-coding design; the whole point behind Agile is that coding is problem solving. If the non-code design solved all problems up front before coding, there would be no need to refactor; we would simply convert the design to well-factored code in one step.
Anyone remember the late 1980s and early 1990s structured design methods, the ones where you got all the problems solved in clever diagrams before you wrote a line of code?
Premature refactoring is refactoring without unit tests. At that point you are simply not ready for a refactoring. First get some unit tests in place and then start thinking about refactoring. Otherwise you might hurt the project more than help it.
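A minimal sketch of that "tests first, then refactor" order: pin down the current behaviour of the code you are about to change with a characterization test, so the refactoring cannot silently alter it. The normalize_name function is a made-up stand-in for the legacy code.

    import unittest

    def normalize_name(raw):
        """Made-up stand-in for the legacy code that is about to be refactored."""
        return " ".join(raw.strip().lower().split())

    class NormalizeNameCharacterizationTest(unittest.TestCase):
        def test_existing_behaviour_is_preserved(self):
            self.assertEqual(normalize_name("  Ada   LOVELACE "), "ada lovelace")
            self.assertEqual(normalize_name(""), "")

    if __name__ == "__main__":
        unittest.main()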
I am a strong believer in constant refactoring. There is no reason to wait until some specific time to start refactoring.
Anytime you see something that should be done better, Refactor.
Just keep this in mind: I know a developer (a pure genius) who refactors so much (he is so smart he can always find a better way) that he never finishes a project.
I think any "1.0" project is susceptible to this kind of ... let's call it "iterative design". If you don't have a clear spec before you start designing you're objects, you'll likely think of many designs and approaches to problems.
So, I think overcoming this specific problem is to clearly design things before you start writing code.
There are a couple of promising solutions to this type of problem, depending on the situation.
If the problem is that you decide something can be optimized in a certain way and you extract a method or something and realize that because of that decision, you are forced to code everything else in a convoluted way, the problem is probably that you didn't think far enough in the design process. If there had been a well written and planned spec, you would have known about this problem ahead of time (unless you didn't read the spec, but that's another issue :) )
Depending on the situation, rapid prototyping can also address this problem, since you'll have a better idea of these implementation details when you start working on the real thing.
The reason why premature optimization is bad is that optimization usually leads to a worse design -- unlike refactoring, which leads to a better and cleaner design, if done thoughtfully and right. What I've found useful for analyzing the usefulness of a refactoring is to first look at our UML diagram to visualize the change, and then write the code doc (e.g. Javadoc) for the class first and add stubs ahead of any real code. Of course experience helps a lot with that; if in doubt, ask your favorite architect ;)
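A Python analogue of that Javadoc-first habit might look like the sketch below (the scheduler class is invented): write the doc comments and stubs first, and only fill in the bodies once the intended shape of the class reads well.

    class ReportScheduler:
        """Decides when each report should next run."""

        def next_run(self, report_id):
            """Return the next time the given report is due."""
            raise NotImplementedError  # body comes later, once the shape reads well

        def reschedule(self, report_id, cron_expression):
            """Replace the report's schedule with a new cron expression."""
            raise NotImplementedError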

How to justify to your colleagues that they produce crappy code?

I am finding it somewhat difficult to carry on working in my current job.
The codebase has become a bit wild lately (but definitely not the worst I've seen), and I'm having a hard time dealing with some parts of the code. I could be stupid, but most likely it's just that it demotivates me a lot to start working on something that is hard to reason about.
My boss is already aware of my thoughts - I expressed what it feels like to work like this. He asked me to provide examples of what was wrong. When I pointed out two or three small issues, he said "yeah, ok" but that refactoring costs him a lot of money, and that we have to get the product out (not the first time I hear this).
I have to admit that the examples were not the most compelling, but the problem is actually tough to explain. It's made up of a lot of tiny "bad decisions" throughout the codebase. (This issue is also, admittedly, quite subjective.) For instance: bad naming, dealing with nulls, boilerplate, not making code reusable (or the opposite), and so on. It can be tiring to re-think someone else's code all over again just to justify that I would have done it differently.
Do you have thoughts on how to deal with this?
I am a bit fed up of having to go hacking around a quick 'n dirty codebase every time!
Sometimes your fellow programmers do things very differently than you, and things you might feel are way wrong might actually have positive aspects. We all have the schools we come from. I think I've come across programmers who complain about things I don't understand just as often as I myself have felt something needed to be complained about.
Make sure you can translate what you complain about into a concrete disadvantage, if for no other reason than so you can motivate middle management about the improvements to make. Things that are hard to translate into measurable facts usually originate from differences in taste/style rather than quality (there are books to read on this subject). The answer posted by smacl has good, concrete advice!
If you can translate your concern into a real disadvantage, then I really do not agree when people say that one has to "accept" situations like this. I've been exposed to this problem more than once, and let me tell you, refactoring is not the solution to the problem. Refactoring only fixes the symptoms.
Accepting a situation like this is the same as saying "bad quality product lines and expensive and frustrating maintenance are something my company can live with". This is of course seldom the case. However, management (i.e. those with the go/no-go on which projects to prioritize) are very often not technically aware of what the problems are, or why development is expensive. They shouldn't have to be, for that matter.
That's why you need a development organization with technical leads, chief architects, a good organisational structure and tiered model etc. Experienced software professionals who have seen where the road leads to if you ignore certain aspects of development. It's about changing the "culture" of your team(s).
Either you stick with your company and try to change how you do things from the roots, or you find another place to work and make sure you find out during the interview exactly how they work in every-day development.
Good luck
I recently faced a very similar problem and a friend gave me some advice that helped a great deal. He said: "keep yourself out of it."
What he meant was, that you must communicate the problems because they are real, costly problems with consequences in terms of time and money. But when you do communicate, talk only about the consequences for the organization. Do not mention the consequences to you, because then it just sounds like whining and will be ignored.
For example:
Not keeping yourself out of it:
"The other developers use these obscure, misleading identifiers and then I have to spend hours going over the code trying to discover what they meant. It's taking up a lot of my time."
Keeping yourself out of it:
"It would be very helpful and cost effective to do some refactoring of class and variable names and also establish some coding standards around identifiers. The immediate payoff will be an easier-to-understand codebase for everyone, leading to better productivity. The longer-term payoff will be that later we'll be able to modify the code and fix things faster. If a critical bug is discovered right before a release, an understandable codebase will be really important."
I hope that helps.
1) Make the problem more visible and get management buy-in
Keep a very detailed diary of the time spent on various coding tasks over the period of about a month. At the end of the month analyse and summarise the contents for your boss, i.e. time wasted and hence money wasted, to illustrate that change of some form is necessary.
2) Think of a cost effective way of moving forward
For example: rather than refactoring the entire code base, separate interfaces from implementations, and enforce tighter standards, including unit tests, naming conventions, etc., at the interface layer. Thus each programmer can have confidence in using code that they have not written. While this is sweeping the crap under the carpet to a certain extent, it is a good way of preparing for larger-scale refactoring.
It is important from a management perspective that workflow is not interrupted, and positive results are visible, so plan accordingly.
3) Agree longer-term improvements with your co-workers
Sit down and agree on reasonable coding standards for future code with the other programmers.
Perhaps you could set up monthly meetings, and at those meetings you could demonstrate good and bad code. Obviously you don't want to point fingers, so you'd want to use generic code examples that are based on stuff you saw in your project. This way you can constructively gather support from others for your style. You might want to compile these after the meetings so people can easily reference them.
I think it is really easy to point out issues and complain, but mentoring people and helping them change requires effort. It isn't an easy task, but if you are having trouble staying motivated in your job, perhaps this would give you a nice burst of motivation. You might learn some things along the way.
You'll find that this is common-place. What you can do is accept that things are done differently by different people. As you fix bugs or add features, you'll get a brief window into a sub-section of the application that you can improve. When you work on the code, you can make it better, and they don't need to know that you're piecemeal improving the code.
Be very careful though. Sometimes code is written in a way that looks 'hacked', but solves a bug that is not easy to discern. Especially if it is older code which has been tried and tested.
On another note, complaining will only get you viewed as a complainer. Think about what outcome you want, and what actions will most likely produce that outcome. You will always hear the answer 'No' when you ask, 'Can I do X-days of work for absolutely no noticable result?'
You could quit and hope to find something better.
Or, you could stick it out and try to improve the code that you can control, when you can control it. No matter how well intentioned the developers are, if there is more than one developer the code base will be "ugly" by a competent developer's standards. Work with the other developers to improve their abilities and refactor code as you make enhancements.
For starters:
Enforce the use of static code analysis tools. Every language has a few well known tools.
Show some before and after refactored code examples, and explain why you think it's better. Try not to put any one person on the spot.
Code reviews by experienced developers.
Keep in mind, some developers can't be helped no matter how much you try...
If someone critiques your code, be polite and open minded; you might learn something.
Cyclomatic complexity / number of changesets/bugs. Complex code is more likely to break, causing more bugs, which cause more changes, which cost more money!
99% of the time you never get to choose the people you work with. Not all relationships work out, be they work or otherwise.
It would be best if your project was broken up enough so that each developer can contribute to a spec of what the other needs, so programmers don't step on each other's toes.
Getting people to change their coding style is hard. It takes a cast-iron technical lead committed to such things, and having one will help when you bring it up. Management types can't do this; the leadership needs to provide the technical details.
It sounds to me like you don't have a problem with the code so much as with your coworkers. It will probably be very difficult for you to force the changes you want to see. Your best bet would probably be to start updating your resume and keep your eyes open for other opportunities.
I think that once you're in the middle of the weeds, you do not really have a good chance of getting things done right; you just have to get them done. I would say most developers do not like firefighting and want the ideal code base, but in my opinion this requires you to spend the time up front planning the system out.
I'd recommend trying to work with your manager to ensure that the areas you feel are lacking now are not lacking in the next project. Maybe it's putting you in the lead, having more code reviews with peers, or maybe it's further training for the entire team.
Either way, I think this is something that most of us go through. I do agree with the other person advising some caution on this. I know that code I wrote yesterday seemed great at the time, and looking back on it, I can probably find 10 other ways to do it and make it look cleaner.
Have you considered adding FxCop to the automated builds to enforce coding style? Other than that, you could try suggesting TDD, which would give whoever writes the test the power to enforce that the interfaces for each class are structured in a particular way.
Off the top of my head, that's all that I can think of.
Things in life are not perfect and if you start nitpicking, feathers will be ruffled and relationships soured.
The best method is to pick your battles carefully. If something is small enough ignore it and live with it. If it is big and worthwhile (i.e. the management sees ROI in backing you) go for it.
This is apt for your situation...
God, grant me the serenity to accept the things I cannot change, courage to change the things I can, and the wisdom to know the difference.
One thing I try to do, and it may help you: if a part of the code is bad, and the idea you propose to fix it is agreed to be best but the "no time" excuse is given, why don't you rewrite it, say, on your own time? If you decide on sticking around at that job for a while, it will only help you. And you alone will learn and become a better programmer.
Note that it is a good idea, and I would even say required, to do a complete code review of that change before check-in, and you should try to time the check-in so that it lands before a complete regression test cycle for a release. That way your refactoring is completely tested out. Over a period of 6 months or so, it will start showing a beneficial impact, and you can then ask for time allocation for this, with proof to back it up.
The only thing that has a chance of convincing management is demonstrating that the things you are citing as perceived problems become actual problems.
To try to take advantage of this, try to keep the "complainer" tone down to a minimum, that is, focus on how this affects the bottom line rather than how it makes you feel. Point out possible consequences of poor decisions that you see being made. If those consequences come to pass, and they cost more than an up-front fix would have, gently remind management that you foresaw the difficulty and provide a helpful suggestion as to how future similar costs can be avoided with a little up-front effort.
The problem is, in many organizations, the problems will never cause enough of an issue for management to care, or if they do, management won't see the connection between your perception of the problem and the actual problem as it occurs. In these cases, you end up seeming like a needlessly persnickety technical person, which isn't a reputation you want to have.
So my advice is, pick your battles. If there is something very egregious that others are about to let slip, then you can speak up and perhaps be vindicated later. For the little details that just grind away at you, I'm afraid there's not much you can do but put up with it.
Show them their own forgotten code disguised as yours for critique.
Take an old piece of their code they have forgotten about
Pretend you wrote it
Ask them to figure out something with it
Make sure they point out how bad the code is for whatever reason
Add your own items. Brainstorm what should be done since it's your fault.
Let them know you didn't know how to bring it up to offend them, but it's their code.
If they recall that they wrote it, they might catch on..
If you have a good relationship with your manager, you might be able to use this to work yourself into a "Senior" or "Lead" Developer role. You could propose that it would be best if one person on the team takes technical leadership of the code base. It would be your job to review the code of others and ask them to make improvements when you feel it is necessary. If you go this route, just make sure to take it slowly. If you ask for a lot very quickly, then you could end up pissing off all the other developers.

How not to rush yourself? [closed]

I often find that I do a less than complete work on a feature, especially in the Design phase. I detect several reasons:
I'm over-optimistic
I feel the need to provide quick solutions, so sometimes I fool myself into thinking the design is fool-proof when in fact it's still full of holes, just to get the job done faster. Of course I end up paying dearly later.
I'm aware of this behavior of mine for some time, yet I still find I don't manage to compensate. Have you encountered similar problems? How do you approach solving them?
I use a couple of techniques. The first is a simple paper to-do list. In the morning I write down my tasks for the day. I try to work on a task until I can cross it off. I cross it off only when I'm done to my own satisfaction. My to-do list helps me stay focused. When an interruption comes in, I can consciously choose whether it is important enough to interrupt what I'm doing now.
The second technique I use is to give up on the idea of "done" for a design. Instead, I focus on what I've started calling "successions", where a design goes through predictable stages. Each stage supports the current functionality well and will be succeeded at some point by the next stage. This lets me do a good job, a job I can be proud of, without over-designing.
I have the intuition that there is a small catalog of such successions (like http://www.threeriversinstitute.org/FirstOneThenMany.html) that would cover most of design. In the meantime, I try to remember that "sufficient to the day are the troubles thereof".
I run into this problem a lot.
My solution is a notebook. (The old fashioned paper kind).
I write out how I'm planning on implementing the solution as a bulleted overview list, and then I try to flesh out each point on the list.
Often, during that process, I come across issues I hadn't thought of.
Of course, the 80/20 rule still applies... I still come across things when I'm actually doing the implementation that hadn't occurred to me, but with experience these tend to diminish.
EDIT: If I'm still not sure at the end of this process, I put together a throwaway prototype testbed... It's important to make sure it's throwaway, because otherwise you run the risk of including some nasty hacks in your real codebase.
It's very common to miss edge-cases and detail when you're in the planning phase of a project, especially in the software development field. Please don't feel that this is a personal failing; it's something endemic.
To counter this, many software development methodologies have emerged. Most recently there has been a shift by many development teams to 'agile' methods, where there is a focus on rapid development with little up-front technical design (after all, many complexities are only discovered when you actually begin developing). I'm currently using the Scrum system, which has been excellent in my small team:
http://en.wikipedia.org/wiki/Agile_methods
http://en.wikipedia.org/wiki/Scrum_%28development%29
If you find that your organisation will not accept what they may regard as a radical shift in approach, it may be worth investigating whether they will agree to the development of a prototype system. This means that you could code up a feature to investigate the technologies involved and judge whether it's feasible, without having to commit to full development, a quality bar, testing schedules etc. The prototype should be thrown away once the feasibility has been proved or disproved, then proper development may begin, including all that you've learned in the process.
If your problem is more related to time management, then I'd recommend the Getting Things Done approach (http://en.wikipedia.org/wiki/Getting_things_done). This is pragmatic and simple, concentrating on making you productive without overloading you with information that isn't immediately relevant to your current work. I've found that I get overwhelmed with project/feature ideas at times and it really helps to write everything down and file it for a later time when I have the resources available to work effectively.
I hope this helps and best of luck!
Communication.
The best way to not rush yourself into programming mistakes is communication. Yes, good ol' fashioned accountability. If another person in the office is involved in the process, the better the outcome. If a programmer just takes on the task without any concern for anybody else, then there is a higher possibility of mistakes.
Accountability Checklist:
How do we support this?
Who needs to know what has changed?
Why are we doing this in the first place?
Will there be anybody who doesn't want this changed?
Will someone else understand how I did this?
How will the user perceive and use this change?
A skeptical comrade is usually good enough to help. Functional specifications are good; they usually answer all of these thoughts. But sometimes a conversation with another person can help you with it, and you can get changes out the door faster.
I have learned, through years of mistakes (though still making them), that almost anything I want to use repeatedly, or distribute, needs to be designed properly. So getting burned enough times will end your optimism.
When getting pressure from management, I tell them I will have to put in the thought anyway, so I should do it when it's cheap. I think on paper as well, so I can actually prove that I'm doing something and it keeps my fingers on the keyboard, both of which provides a soothing effect to management. ;-)
At the risk of sounding obvious - be pessimistic. I had a few experiences where I thought "that should take a few hours" and it ended up taking a couple days because of all the little things that pop up unexpectedly.
By far the best way I've found to manage things is to (much like Andrew's answer) write out the design and requirements as a starting point. Then I go through and look for weak points in the design, gotchas and additional use cases etc. I try to look at this as a critical exercise - there's no code written yet, so this is the time to be totally ruthless and look for every weak point. Look for error conditions you'll have to handle, and whatever amount of time you think it will take to complete each feature/function, pad that amount by a lot. I've had times where I've doubled my initial estimate and still not been that far off the mark.
It's very hard as a programmer to realistically project debugging time - writing the code is easy to estimate, but debugging that into functioning, valid code is something else entirely. Therefore I find there's no exact science to it but I just pad tasks by a whole bunch, so that I have plenty of breathing room for debugging.
See also Evidence Based Scheduling which is a fascinating concept in scheduling developed by FogCreek for their FogBugz product.
You and the rest of the world.
You need a more detailed design, a more accurate estimate, and the willingness to accept that sometimes the optimal solution is not necessarily the best solution (e.g., you could code some loop in assembler to get optimal performance, but that's going to take a lot longer than just writing
for (i=1; i<=10; i++) {}
). Is the time spent doing it really worth it for an accounting package, as opposed to a missile system?
I like designing, but over time I've found that doing a lot of design up front is like building castles in the sky - it's too much speculation, however well-educated, missing the critical feedback that comes from actually implementing and using the design.
So today I'm much more into accepting that while implementing a design I will learn a lot of new things about it, and I need to feed that learning back into the design. Doing that is a skill that is fun to learn; it includes keeping a design flexible by keeping it simple, free of duplication, cohesive, and decoupled; changing the design in small, controlled steps (= refactoring); and writing the necessary extensive suite of automated tests that makes this kind of change safe.
This seems to be a much more effective approach to me than getting better at "up front design speculation" - and additionally it makes me equally well prepared for the inevitable moment when the design needs to change due to a simply unforeseeable change in the requirements.
Divide, divide, divide. List all the steps that will be required to finish the project, then list all the steps those steps will require to be completed, and so on until you reach atomic items you are absolutely sure you can finish in a day or less. Add up the durations of all these items to arrive at a length of time.
Then double it. Now you have a number that, if depressing, is at least somewhat realistic.
If possible "Sleep on your design" before publishing it. I find after I leave work, I usually think of things I have missed. This usually happens while I am lying in bed before falling asleep or even while showering the next day.
I also find it valuable to have a peer/friend that I trust review what I have before distributing it. Somebody else almost always sees something I didn't think of or miscommunicated.
I like to do as others stated here: write down in pseudo code what the flow of your app will be. This immediately highlights some detailed areas that may require further attention that were not apparent up front.
Pseudo code is also readable to business users, who can verify that your approach meets their needs.
Using pseudo code also creates a nice set of methods that could be put to use as an interface in the final solution. Once the pseudo code is fairly tight, look for patterns and review some common GoF patterns. They do not have to be perfect, but using them will shield you from having to rewrite the code later during the revisions that are bound to come along.
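As a sketch of pseudo code hardening into an interface (the order-import flow is invented for illustration), each line of the pseudo code becomes a named method stub, and the coordinating method shows how they fit together:

    # Pseudo code:
    #   read the orders file
    #   validate each order
    #   save the valid orders

    class OrderImport:
        def read_orders(self, path):
            raise NotImplementedError

        def validate(self, order):
            raise NotImplementedError

        def save(self, orders):
            raise NotImplementedError

        def run(self, path):
            orders = self.read_orders(path)
            self.save([order for order in orders if self.validate(order)])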
Just taking an hour or two to write pseudo code yields some invaluable time-saving pieces later on:
1. An object model emerges
2. The program's flow is clearly defined for others
3. It can be used as documentation of your design with some refinement
4. Comments are easier to add and will be clearer for someone else reviewing your code.
Best of luck to you!
I've found that the best way to make sure you've chosen a good design is to make sure that you understand the problem, know the limitations you have, and know what things are must-haves vs. nice-to-haves.
Understanding the problem will involve talking to the people who have the need and keeping them anchored to what needs to get done first instead of how they think it ought to get done. Once you know what actually has to happen, you can go back and talk over requirements about how.
Knowing your limitations may be quite easy: needs to run on the iPhone; has to be a web application; needs to integrate with the already-existing Java code and deployment setup; and so on. It may be quite difficult: you don't know what the potential size of your user base is (hundreds? thousands? millions?); you don't know whether you'll need to localize it (though if you're not sure, assume you will have to).
Must-haves vs. nice-to-haves: this is possibly the most difficult part. Users very often have emotional attachments to "requirements" ("It should look just like Excel") that are not actually part of the "has to happen" stuff. You often have to juggle functionality vs. desires to get an acceptable implementation. You can't always give everyone a pony.
Make sure you write all this down! Even if it evolves along the way, or the design is small, having a "this is what we're planning to do now" guide to refer to when you need to make a decision about committing resources makes it easier to restrain yourself from implementing a really cool whiz-bang feature instead of a boring must-do.
Since you recognize that you feel the need to provide a quick solution, perhaps it will slow you down to realize that you can probably solve the problem faster, and deliver it sooner, if you spend more time up front on design. For instance, if you spend 3 hours designing and 30 hours writing code, spending 6 hours on design might mean you only need 10 hours of coding. (These are not actual figures, just examples.)
You might try to quantify this for yourself over the next few projects you do. Do a couple where you behave as you normally would and see what ratio of design / code writing / testing and debugging you actually end up with. Then, on the next project, deliberately increase the percentage of time you spend on the design phase and see whether it shortens the time needed for the other phases. You will have to try this over several projects to get a true baseline, since projects can be quite different. Treat it as a test: can you improve your performance in the other phases, and thus deliver a product faster, by spending 20%, 50%, or 100% more time on design?
Remember: the later in the process you find a problem with the design, the harder (and more time-consuming) it is to fix.

How do you prevent over complicated solutions or designs? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 4 years ago.
Improve this question
Many times we find ourselves working on a problem, only to figure out that the solution being created is far more complex than the problem requires. Are there controls, best practices, techniques, etc. that help you control over-complication in your workplace?
Getting someone new to look at it.
In my experience, designing for an overly general case tends to breed too much complexity.
Engineering culture encourages designs that make fewer assumptions about the environment; this is usually a good thing, but some people take it too far. For example, it might be nice if your car design didn't assume a specific gravitational pull, but nobody is actually going to drive your car on the moon - and if they did, it wouldn't work anyway, because there is no oxygen to make the fuel burn.
The difficult part is that the guy who developed the "works-on-any-planet" design is often regarded as clever, so you may have to work harder to argue that his design is too clever.
Understanding trade-offs, so you can decide between good assumptions and bad assumptions, will go a long way toward avoiding a needlessly complicated design.
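A toy illustration of the difference (all names and numbers are invented): the "works-on-any-planet" version drags in a parameter that will never vary in practice, while the simple version just assumes the environment the product actually runs in.

    # Over-general: parameterises something nobody will ever vary.
    GRAVITY_BY_PLANET = {"earth": 9.81, "moon": 1.62, "mars": 3.71}

    def braking_distance_general(speed_kmh, planet="earth", friction=0.7):
        speed_ms = speed_kmh / 3.6
        return speed_ms ** 2 / (2 * friction * GRAVITY_BY_PLANET[planet])

    # Simple: assumes the one environment the car will actually drive in.
    def braking_distance(speed_kmh, friction=0.7):
        speed_ms = speed_kmh / 3.6
        return speed_ms ** 2 / (2 * friction * 9.81)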
If it's too hard to test, your design is too complicated. That's the first metric I use.
Here are some ideas for making your designs simpler:
read some programming books and articles, and then apply them in your work and write code
read lots of code (good and bad) written by other people (like Open Source projects) and learn to see what works and what does not
build safety nets (unit tests) to enable experimentation with your code
use version control to enable rollback if those experiments take a wrong turn
TDD (test driven development) and BDD (behaviour driven development)
change your attitude: ask how you can make it so that "it simply works" (convention over configuration can help there; or ask how Apple would do it)
practice (like jazz players -- jam with code, try Code Kata)
write the same code multiple times, in different languages and after some time has passed
learn new languages with new concepts (if you use static language, learn dynamic one; if you use procedural language, learn functional one; ...) [one language per year is about right]
ask someone to review your code, and actively ask how you can make it simpler and more elegant (and then make it so)
get years under your belt by doing the above (time helps an active mind)
I create a design, and then I look at it and try to remove (aggressively) everything that doesn't seem to be needed. If it turns out I need it later, when I am polishing the design, I add it back in. I do this over several iterations, refining as I go along.
Read "Working Effectively With Legacy Code" by Michael C. Feathers.
The point is that if you have code that works and you need to change the design, nothing works better than making your code unit testable and breaking it into smaller pieces.
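A sketch of that idea (not code from the book; all names are invented): the pricing rule below is tangled with a side effect, so the first move is to pull it out into a small, separately testable piece.

    # Before: the pricing rule can't be tested without sending real mail.
    def bill_customer(customer, items, mailer):
        total = sum(price for _, price in items)
        if customer.get("loyal"):
            total *= 0.9
        mailer.send(customer["email"], f"You owe {total:.2f}")

    # After: the pure pricing rule is a small piece with its own unit tests,
    # and the side effect stays at the edge.
    def invoice_total(items, loyal=False):
        total = sum(price for _, price in items)
        return total * 0.9 if loyal else total

    def bill_customer_v2(customer, items, mailer):
        total = invoice_total(items, loyal=customer.get("loyal", False))
        mailer.send(customer["email"], f"You owe {total:.2f}")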
Using Test Driven Development and following Robert C. Martin's Three Rules of TDD:
You are not allowed to write any production code unless it is to make a failing unit test pass.
You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
In this way you are not likely to get much code that you don't need. You will always be focused on making one important thing work and won't ever get too far ahead of yourself in terms of complexity.
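Here is a minimal sketch of one red/green cycle under those rules (the example and all names are invented): the tests are written first and fail, then just enough production code is added to make them pass.

    import unittest

    # Step 1 (red): the failing tests come first.
    class LeapYearTest(unittest.TestCase):
        def test_century_not_divisible_by_400_is_not_leap(self):
            self.assertFalse(is_leap_year(1900))

        def test_year_divisible_by_400_is_leap(self):
            self.assertTrue(is_leap_year(2000))

    # Step 2 (green): only enough production code to make those tests pass;
    # further tests (e.g. for 2024 or 2023) would drive out the full rule.
    def is_leap_year(year):
        return year % 400 == 0

    if __name__ == "__main__":
        unittest.main()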
Test-first may help here, but it is not suitable for every situation. And it's not a panacea anyway.
Starting small is another great idea. Do you really need to stuff all 10 design patterns into this thing? Try doing it the "stupid way" first. Doesn't quite cut it? Okay, do it the "slightly less stupid way". And so on.
Get it reviewed. As someone else wrote, two pairs of eyes are better. Even better are two brains. Your mate may well see room for simplification, or a problematic area you thought was fine only because you spent many hours hacking on it.
Use a lean language. Languages such as Java, or sometimes C++, seem to encourage nasty, convoluted solutions. Simple things tend to span multiple lines of code, and suddenly you need 3 external libraries and a big framework to manage it all. Consider using Python, Ruby, etc. - if not for your project, then for some private use. It can change your mindset to favor simplicity, and reassure you that simplicity is possible.
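As a small, throwaway illustration of that mindset (the task itself is invented), here's the sort of thing that stays pleasantly short in a lean language:

    # Print the ten most common words in a text file -- no framework needed.
    from collections import Counter
    import re
    import sys

    with open(sys.argv[1], encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())

    for word, count in Counter(words).most_common(10):
        print(f"{count:6d}  {word}")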
Reduce the amount of data you're working with by serialising the task into a series of smaller tasks. Most people can only hold half a dozen (plus or minus) conditions in their head while coding, so make that the unit of implementation. Design for all the tasks you need to accomplish, but then ruthlessly hack the design so that you never have to play with more than half a dozen paths through the module.
This follows from Bendazo's post - simplify until it becomes easy.
It is inevitable, once you have been a programmer for a while, that this will happen. If you have seriously underestimated the effort, or hit a problem where your solution just doesn't work, then stop coding and go talk to your project manager. I always like to take the options with me to the meeting: the problem is A, we can do X which will take 3 days, or we can try Y which will take 6 days. Don't make the choice yourself.
Talk to other programmers every step of the way. The more eyes there are on the design, the more likely an overcomplicated aspect is revealed early, before it becomes too ossified in the codebase.
Constantly ask yourself how you will use whatever you are currently working on. If the answer is that you're not sure, stop to rethink what you're doing.
I've found it useful to jot down thoughts about how to potentially simplify something I'm currently working on. That way, once I actually have it working, it's easier to go back and refactor or redo as necessary instead of messing with something that's not even functional yet.
This is a delicate balancing act: on the one hand you don't want something that takes too long to design and implement, on the other hand you don't want a hack that isn't complicated enough to deal with next week's problem, or even worse requires rewriting to adapt.
A couple of techniques I find helpful:
If something seems more complex than you would like, don't sit down to implement it as soon as you have finished thinking about it. Find something else to do for the rest of the day. Numerous times I have ended up thinking of a different solution to an early part of the problem that removes a lot of the complexity later on.
In a similar vein, have someone else you can bounce ideas off of. Make sure you can explain to them why the complexity is justified!
If you are adding complexity because you think it will be justified in the future, then try to establish when in the future you will use it. If you can't (realistically) imagine needing the complexity for a year or three, then it probably isn't worth paying for it now.
I ask my customers why they need some feature. I try to get to the bottom of their request and identify the problem they are experiencing. This often leads to a simpler solution than I (or they) would have thought of.
Of course, if you know your clients' work habits and what problems they have to tackle, you can understand their problems much better from the get-go. And if you "know them" know them, then you understand their speech better. So, develop a close working relationship with your users. It's step zero of engineering.
Take time to name the concepts of the system well, and find names that are related; this makes the system more familiar. Don't be hesitant to rename concepts - the better the connection to the world you know, the better your brain can work with it.
Ask for opinions from people who get their kicks from clean, simple solutions.
Only implement concepts needed by the current project (a desire for future-proofing or for generic systems makes your design bloated).

Resources