Can using multiple analytics packages become a problem? - performance

We are launching a new company next month, and I am curious about which analytics package to use. My current thought is to just use two or three together. Would there be any serious performance issues in doing so?
Specifically we're thinking of: Reinvigorate and Kiss Metrics
And maybe Google Analytics too.

Installing multiple web analytics packages is not necessarily a bad thing. I see clients do it all the time. The main benefit is that if one tool goes down or has some kind of issue, you have a backup. Another benefit is that one tool may offer a lot of insight for you but lack certain features that another tool has. Or you have different stakeholders asking different questions, and sometimes it's easier to just use a separate tool for separate people to avoid conflict in data between the two.
HOWEVER
Please Please PLEASE do not fall into the way-too-common trap of trying to compare numbers between tools! Or at least, try to suppress your gut reaction to ask why there is a difference. There is a 95% chance you will needlessly waste a great deal of everybody's time and resources trying to track down a phantom bug that more than likely does not exist, or is otherwise unexplainable and falls under the acceptable-margin-of-error category.

I've seen commercial (major brand company-type) sites running somewhere around 15-20+ simultaneously, according to Ghostery. It doesn't really seem to hurt; just load them all asynchronously.


How to assemble a project with software products and your own code

Let's say you have a specific project at hand; it can be divided into parts, and you are not completely sure about all the difficulties that will arise.
Time is of the essence.
How do you decide whether a part should use a software product or your own code? (considering that some tools are awesome, but require a lot of time to learn)
How do you choose the right software product?
How much time (as a percentage) should this product-selection stage take, if any, and how much time should go into evaluating a single product?
Is there a way back? Is it OK to change your mind after putting effort into a product and finding it unsuitable?
I would love to hear any rules of thumb about these.
Changing your decisions is like changing your blueprint for a house while it's already being built.
It will entirely depend on what you have spent in time and money to that point.
Some considerations:
0) Understand the problem in clear and simple terms before beginning. Know what's critical to its success, and then use that list to see if any software, language, or tool will aid it, at what cost, and whether the cost outweighs the benefit.
1) Use a crammer's schedule. Build it in the order of what you would build if you only had 1 day or 1 week and no more to work on it. It's amazing how much doesn't matter anymore when you have to do 50% of the features at 100% of the quality. Focus on value, value, value. Read something like 37signals' book Getting Real for more on this.
2) Do not re-invent the wheel. Building something from scratch always seems easier than it is. Only consider it if you need just a fraction of the full implementation and it's truly simpler, meaning you can avoid piling on abstraction until you forget what you were building. If you genuinely can build it faster, better, and cheaper, do it.
3) Know the features of your tools, and the benefits any tools need to give your solution. You should be familiar with or at least aware of many of the tools out there that you may or may not integrate.
4) Pick a language that is used to solve a lot of problems. Chances are you will find many great libraries and tools to build your software that will save you time. If you need something that delivers, can run, and lets you lean on the smarts of others, use something established, or a language that can access .NET or Java easily if need be.
For each part of your software you recognize as a software component/package:
How do you decide whether a part should use a software product or your own code?
(considering that some tools are awesome, but require a lot of time to learn)
Ask yourself whether the component you are considering is a part of your product's main business core.
If not, then it is usually better to use an existing solution and not spend too much time on it.
If it is, then make sure there is no existing product that is better than what you are planning. If there is, consider purchasing licenses for it instead of developing your own.
Search online for similar components (commercial, open source and even articles/demo-source-code).
Do any of them implement all of your requirements from the components?
How much do they cost, would it cost you more to develop and maintain a similar component?
What are the license conditions? - Are they OK for your product?
If the component includes a user interface, is it pleasant to look at and easy to use?
If you answered yes to all the above then do not develop the component yourself.
If not:
Is the component open source or published in an article / demo source code? If so, is it robust, and could you take the code and improve it, or use it as an example to help you write code that is more suitable for your requirements? If so, write your own component, but reuse that code so it is not developed entirely from scratch.
If your answer to the above is no, then you'll have to develop your own (or you're searching in the wrong places).
How do you choose the right software product?
See answers to 1.
How much time (as a percentage) should this product-selection stage take, if any, and how much time should go into evaluating a single product?
Clear an entire day, search for existing components, read about them (features, prices, reviews) and download + install up to 5 of them.
Clear another day to evaluate 2-3 products: compare demos/examples, look at the code, and write 2 small examples using each (the same example, different product).
If you choose more than 3, clear another day and test the others.
Is there a way back? Is it OK to change your mind after putting effort into a product and finding it unsuitable?
Always design your software so that every component is replaceable.
This guarantees that there is always "a way back".
(Use interfaces and the adapter design pattern, divide the code into many assemblies, and connect all components as loosely as possible (using events, binding, etc.): loose coupling.)
Even if you implement something yourself, make sure there is a way back; sometimes you use the wrong technology/design and have to replace a component with a new one you develop or purchase.
Other rules of thumb:
Consider which application-wide technologies to use before considering each component.
Writing in assembly would take the longest, in C less, in C++ even less, in more modern languages such as C#, Java, Delphi even less.
Which has more off-the-shelf components that are relevant to you? What does your team have experience in?
If you are using .NET (C#), then WPF could help you lower the coupling between GUI and business logic and make a better-looking GUI; however, it takes time to learn how to use it (a 5-day minimum course is very much recommended).
As in any art, the difficulty is composing a good solution out of a very large space of possible solutions. There are as many ways to go about this as there are developers.
I'd normally spend some time understanding the problem and stating it as clearly and succinctly as possible, preferably in written form. The problem description should be completely abstracted away from any possible solutions. Next I'd normally list the constraints that will need to be applied to the solution (time, budget, legal, political, performance, usability, skill availability within the team and so on).
Then the theory goes that you need to look on the market for something that solves the problem and meets the constraints at the same time. In practice, the process is not that straightforward: you try to identify market categories that are likely to be useful, then research them, see what is available and continuously try to reduce the gap between the constraints and capabilities as much as possible, often by going back and revisiting and re-negotiating the constraints.
A few generic tips:
During the research keep coming back to the original problem.
There is always more than one solution; try to extend the breadth of the search space (looking at very different ways of solving the problem) before going deeper.
Be clear on the number of options worth researching, and the amount of time worth spending on each of them, before deciding whether to investigate further.
It's seldom worth finding an optimal solution, especially when the technological landscape keeps changing very rapidly. Look for a solution that is good enough: "The Paradox of Choice - Why More is Less".
It's rarely worth turning to users for help (unless they are software experts) in choosing between several options. If you've got a number of options all looking equally attractive, that means you need to go back and understand the original problem better; it's likely you've missed a requirement or two.
Some further notes on using third-party components (refers to GUI components, but easy to apply to other software areas as well).
And even more notes on scoping, composing and researching for a project.
How do you decide whether a part should use a software product or your own code? (considering that some tools are awesome, but require a lot of time to learn)
Ask yourself two questions.
1) Is it a mature product? If yes, then:
2) How long would it take to create the functionality it provides on your own? If that time multiplied by your hourly rate is greater than the cost of the product, then use the product.
How do you choose the right software product?
Consult your network of other developers. Have they used it? Did they run into problems? Consult the interweb. Create a prototype using the product. Does it work well? Any major bugs?
How much time (as a percentage) should this product-selection stage take, if any, and how much time should go into evaluating a single product?
It depends on the size of the project, and how critical the product is to its success. Most of the time, you are going to be able to get a high-level view of the product in a very short amount of time.
It may take just a few minutes of using it before you say: nope, not ready for prime time. If it makes it past that, a day or two of experimentation may tell you that it passes muster for your project.
If it's a huge project with many developers, then you probably want to spend more time doing a prototype application with it to be sure it's worth investing all that time in.
Is there a way back? Is it OK to change your mind after putting effort into a product and finding it unsuitable?
If you find it's not working out, there's nothing wrong with going back. In fact you probably have to. Ideally you will find this out early. Not at the 11th hour. Again, this is the purpose of prototyping.
There are already some really good answers here, so I won't repeat them; however, there is one point you should definitely consider, and though I would have thought it's obvious, I haven't seen it mentioned here yet:
The personnel you have available to implement the solution, their core competency, and their general level of competence.
Who you have to implement this (assuming it's a team, and not just yourself - but relevant even if it's just you, too...) can have a HUGE effect on the outcome. If you don't have experienced programmers to help you develop this, you're better off looking for some OTS product to do the work for you... Or, even if you have programmers who are not likely to succeed, you still might want to find a solution with lower overall project risk.

Standard methods of debugging

What's your standard way of debugging a problem? This might seem like a pretty broad question with some of you replying 'It depends on the problem' but I think a lot of us debug by instinct and haven't actually tried wording our process. That's why we say 'it depends'.
I was sort of forced to put my process into words recently because a few developers and I were working on the same problem and we were debugging it in totally different ways. I wanted them to understand what I was trying to do and vice versa.
After some reflection I realized that my way of debugging is actually quite monotonous. I'll first try to reliably replicate the problem (especially on my local machine). Then, through a process of elimination (and this is where I think it's problem dependent), I try to identify the problem.
The other guys were trying to do it in a totally different way.
So, just wondering what has been working for you guys out there? And what would you say your process is for debugging if you had to formalize it in words?
BTW, we still haven't found out our problem =)
My approach varies based on my familiarity with the system at hand. Typically I do something like:
Replicate the failure, if at all possible.
Examine the fail state to determine the immediate cause of the failure.
If I'm familiar with the system, I may have a good guess about the root cause. If not, I start to mechanically trace the data back through the software while challenging basic assumptions made by the software.
If the problem seems to have a consistent trigger, I may manually walk forward through the code with a debugger while challenging implicit assumptions that the code makes.
Tracing the root cause is, of course, where things can get hairy. This is where having a dump (or better, a live, broken process) can be truly invaluable.
I think that the key point in my debugging process is challenging pre-conceptions and assumptions. The number of times I've found a bug in a component that I or a colleague would swear was working fine is massive.
I've been told by my more intuitive friends and colleagues that I'm quite pedantic when they watch me debug or ask me to help them figure something out. :)
Consider getting hold of the book "Debugging" by David J Agans. The subtitle is "The 9 Indispensable Rules for Finding Even the Most Elusive Software and Hardware Problems". His list of debugging rules — available in a poster form at the web site (and there's a link for the book, too) is:
Understand the system
Make it fail
Quit thinking and look
Divide and conquer
Change one thing at a time
Keep an audit trail
Check the plug
Get a fresh view
If you didn't fix it, it ain't fixed
The last point is particularly relevant in the software industry.
I picked these up from the web or from some book I can't recall (it may have been Coding Horror...)
Debugging 101:
Reproduce
Progressively Narrow Scope
Avoid Debuggers
Change Only One Thing At a Time
Psychological Methods:
Rubber-duck debugging
Don't Speculate
Don't be too Quick to Blame the Tools
Understand Both Problem and Solution
Take a Break
Consider Multiple Causes
Bug Prevention Methods:
Monitor Your Own Fault Injection Habits
Introduce Debugging Aids Early
Loose Coupling and Information Hiding
Write a Regression Test to Prevent Recurrence
Technical Methods:
Inert Trace Statements
Consult the Log Files of Third Party Products
Search the web for the Stack Trace
Introduce Design By Contract
Wipe the Slate Clean
Intermittent Bugs
Exploit Locality
Introduce Dummy Implementations and Subclasses
Recompile / Relink
Probe Boundary Conditions and Special Cases
Check Version Dependencies (third party)
Check Code that Has Changed Recently
Don't Trust the Error Message
Graphics Bugs
When I'm up against a bug that I can't seem to figure out, I like to make a model of the problem. Make a copy of the section of problem code, and start removing features from it, one at a time. Run a unit test against the code after every removal. Through this process you will either remove the feature with the bug (and hence locate the bug), or you will have isolated the bug down to a core piece of code that contains the essence of the problem. And once you figure out the essence of the problem, it's a lot easier to fix.
I normally start off by forming an hypothesis based on the information I have at hand. Once this is done, I work to prove it to be correct. If it proves to be wrong, I start off with a different hypothesis.
Most multithreaded synchronization issues get solved very easily with this approach.
Also, you need to have a good understanding of the debugger you are using and its features. I work on Windows applications and have found WinDbg to be extremely helpful in finding bugs.
Reducing the bug to its simplest form often leads to greater understanding of the issue, as well as adding the benefit of being able to involve others if necessary.
Setting up a quick reproduction scenario to allow for efficient use of your time to test any hypothesis you choose.
Creating tools to dump the environment quickly for comparisons.
Creating and reproducing the bug with logging turned up to the maximum level.
Examining the system logs for anything alarming.
Looking at file dates and timestamps to get a feeling if the problem could be a recent introduction.
Looking through the source repository for recent activity in the relevant modules.
Apply deductive reasoning and the principle of Occam's Razor.
Be willing to step back and take a break from the problem.
I'm also a big fan of using the process of elimination. Ruling out variables tremendously simplifies the debugging task. It's often the very first thing that should be done.
Another really effective technique is to roll back to your last working version if possible and try again. This can be extremely powerful because it gives you solid footing to proceed more carefully. A variation on this is to get the code to a point where it is working, with less functionality, rather than not working with more functionality.
Of course, it's very important not to just try things. That increases your despair because it never works. I'd rather make 50 runs to gather information about the bug than take a wild swing and hope it works.
I find the best time to "debug" is while you're writing the code. In other words, be defensive. Check return values, liberally use assert, use some kind of reliable logging mechanism and log everything.
To more directly answer the question, the most efficient way for me to debug problems is to read code. Having a log helps you find the relevant code to read quickly. No logging? Spend the time putting it in. It may not seem like you're finding the bug, and you may not be. The logging might help you find another bug though, and eventually once you've gone through enough code, you'll find it....faster than setting up debuggers and trying to reproduce the problem, single stepping, etc.
While debugging I try to think of what the possible problems could be. I've come up with a fairly arbitrary classification system, but it works for me: all bugs fall into one of four categories. Keep in mind here that I'm talking about runtime problems, not compiler or linker errors. The four categories are:
dynamic memory allocation
stack overflow
uninitialized variable
logic bug
These categories have been most useful to me with C and C++, but I expect they apply pretty well elsewhere. The logic bug category is a big one (e.g. putting a < b when the correct thing was a <= b), and can include things like failing to synchronize access among threads.
Knowing what I'm looking for (one of these four things) helps a lot in finding it. Finding bugs always seems to be much harder than fixing them.
The actual mechanics for debugging are most often:
do I have an automated test that demonstrates the problem?
if not, add a test that fails
change the code so the test passes
make sure all the other tests still pass
check in the change
No automated testing in your environment? No time like the present to set it up. Too hard to organize things so you can test individual pieces of your program? Take the time to make it so. May make it take "too long" to fix this particular bug, but the sooner you start, the faster everything else'll go. Again, you might not fix the particular bug you're looking for but I bet you find and fix others along the way.
My method of debugging is different, probably because I am still a beginner.
When I encounter a logical bug I seem to end up adding more variables to see which values go where, and then I debug line by line through the piece of code that is causing the problem.
Replicating the problem and generating a repeatable test data set is definitely the first and most important step to debugging.
If I can identify a repeatable bug, I'll typically try and isolate the components involved until I locate the problem. Frequently I'll spend a little time ruling out cases so I can state definitively: The problem is not in component X (or process Y, etc.).
First I try to replicate the error; without being able to replicate it, it is basically impossible in a non-trivial program to guess the problem.
Then, if possible, I break out the code into a separate standalone project. There are several reasons for this: first, if the original project is big, it is quite difficult to debug; second, it eliminates or highlights any assumptions about the code.
I almost always have another copy of VS open which I use for debugging parts in mini projects and to test routines which I later add to the main project.
Once having reproduced the error in the separate module the battle is almost won.
Sometimes it is not easy to break out a piece of code, so in those cases I use different methods depending on how complex the issue is. In most cases assumptions about data seem to come back and bite me, so I try to add lots of asserts in the code to make sure my assumptions are correct. I also disable code using #ifdef until the error disappears. Eliminating dependencies on other modules, etc... sort of slowly circling in on the bug like a vulture.
I don't think I really have a conscious way of doing it; it varies quite a lot, but the general principle is to eliminate the noise around the issue until it is quite obvious what it is. Hope I didn't sound too confusing :)

How do you make the case to your colleagues that they produce crappy code?

I am finding it somewhat difficult to carry on working in my current job.
The codebase has become a bit wild lately (but definitely not the worst I've seen), and I'm having a hard time dealing with some parts of the code. I could be stupid, but most likely it's just that it demotivates me a lot to start working on something that is hard to reason about.
My boss is already aware of my thoughts - I expressed what it feels like to work like this. He asked me to provide examples of what was wrong. When I pointed out two or three small issues, he said "yeah, ok" but that refactoring costs him a lot of money, and that we have to get the product out (not the first time I've heard this).
I have to admit that the examples were not the most compelling, but the problem is actually tough to explain. It's made up of a lot of tiny "bad decisions" throughout the codebase. (I also realize this kind of issue is quite subjective.) For instance, bad naming, dealing with nulls, boilerplate, not making code reusable (or the opposite) and so on. It can be tiring to re-think someone else's code all over again just to justify why I would have done it differently.
Do you have thoughts on how to deal with this?
I am a bit fed up with having to go hacking around a quick 'n' dirty codebase every time!
Sometimes your fellow programmers do things very differently than you, and things you might feel are way wrong might actually have positive aspects. We all have our schools we come from. I think I've come across programmers who complain about things I don't understand about as often as I myself have felt something needs to be complained about.
Make sure you can translate what you complain about into a concrete disadvantage, if for no other reason than so that you can motivate middle management about improvements to make. Things that are hard to translate into measurable facts usually originate from differences in taste/style rather than quality (there are books to read about this subject). The answer posted by smacl has good and concrete advice!
If you can translate your concern into a real disadvantage, then I really do not agree when people say that one has to "accept" situations like this. I've been exposed to this problem more than once, and let me tell you, refactoring is not the solution to the problem. Refactoring only fixes the symptoms.
Accepting a situation like this is the same as saying "bad quality product lines and expensive and frustrating maintenance is something my company can live with". This is of course seldom the case. However, management (i.e. those with the go/no-go on what projects to prioritize) are very often not technically aware of what the problems are, or why development is expensive. They shouldn't have to be, for that matter.
That's why you need a development organization with technical leads, chief architects, a good organisational structure and tiered model etc. Experienced software professionals who have seen where the road leads to if you ignore certain aspects of development. It's about changing the "culture" of your team(s).
Either you stick with your company and try to change how you do things from the roots, or you find another place to work and make sure you find out during the interview exactly how they work in every-day development.
Good luck
I recently faced a very similar problem and a friend gave me some advice that helped a great deal. He said: "keep yourself out of it."
What he meant was, that you must communicate the problems because they are real, costly problems with consequences in terms of time and money. But when you do communicate, talk only about the consequences for the organization. Do not mention the consequences to you, because then it just sounds like whining and will be ignored.
For example:
Not keeping yourself out of it:
"The other developers use these obscure, misleading identifiers and then I have to spend hours going over the code trying to discover what they meant. It's taking up a lot of my time."
Keeping yourself out of it:
"It would be very helpful and cost effective to do some refactoring of class and variable names and also establish some coding standards around identifiers. The immediate payoff will be an easier-to-understand codebase for everyone, leading to better productivity. The longer-term payoff will be that later we'll be able to modify the code and fix things faster. If a critical bug is discovered right before a release, an understandable codebase will be really important."
I hope that helps.
1) Make the problem more visible and get management buy-in
Keep a very detailed diary of the time spent on various coding tasks over the period of about a month. At the end of the month analyse and summarise the contents for your boss, i.e. time wasted and hence money wasted, to illustrate that change of some form is necessary.
2) Think of a cost-effective way of moving forward
For example: rather than refactoring the entire code base, separate interfaces from implementations, and enforce tighter standards, including unit tests, naming conventions, etc., at the interface layer. Thus each programmer can have confidence in using code that they have not written. While this is sweeping the crap under the carpet to a certain extent, it is a good way of preparing for larger-scale refactoring.
It is important from a management perspective that workflow is not interrupted, and positive results are visible, so plan accordingly.
3) Agree on longer-term improvements with your co-workers
Sit down and agree on reasonable coding standards for future code with the other programmers.
Perhaps you could set up monthly meetings, and at those meetings you could demonstrate good and bad code. Obviously you don't want to point fingers, so you'd want to use generic code examples based on stuff you saw in your project. This way you can constructively gather support from others in your style. You might want to compile these after the meetings so people can easily reference them.
I think it is really easy to point out issues and complain, but mentoring people and helping them change requires effort. It isn't an easy task, but if you are having trouble staying motivated in your job, perhaps this would give you a nice burst of motivation. You might learn some things along the way.
You'll find that this is common-place. What you can do is accept that things are done differently by different people. As you fix bugs or add features, you'll get a brief window into a sub-section of the application that you can improve. When you work on the code, you can make it better, and they don't need to know that you're piecemeal improving the code.
Be very careful though. Sometimes code is written in a way that looks 'hacked', but solves a bug that is not easy to discern. Especially if it is older code which has been tried and tested.
On another note, complaining will only get you viewed as a complainer. Think about what outcome you want, and what actions will most likely produce that outcome. You will always hear the answer 'No' when you ask, 'Can I do X days of work for absolutely no noticeable result?'
You could quit and hope to find something better.
Or, you could stick it out and try to improve the code that you can control, when you can control it. No matter how well intentioned the developers are, if there is more than one developer the code base will be "ugly" by a competent developer's standards. Work with the other developers to improve their abilities and refactor code as you make enhancements.
For starters:
Enforce the use of static code analysis tools. Every language has a few well known tools.
Show some before and after refactored code examples, and explain why you think it's better. Try not to put any one person on the spot.
Code reviews by experienced developers.
keep in mind, some developers can't be helped no matter how much you try...
If someone critiques your code be polite and open minded, you might learn something.
Track cyclomatic complexity and the number of changesets/bugs. Complex code is more likely to break, causing more bugs, which cause more changes, which cost more money!
99% of the time you never get to choose the people you work with. Not all relationships work out, be they work or otherwise.
It would be best if your project was broken up enough so that each developer can contribute to a spec of what the other needs, so programmers don't step on each other's toes.
Getting people to change their coding style is hard. It takes a cast-iron technical lead committed to such things, and that will help when you bring it up. Management types can't do this; the leadership needs to provide the technical details.
It sounds to me like you don't have a problem with the code so much as with your coworkers. It will probably be very difficult for you to force the changes you want to see. Your best bet would probably be to start updating your resume and keep your eyes open for other opportunities.
I think that once you're in the middle of the weeds, you do not really have a good chance of getting things done right, you just have to get them done. I would say most developers do not like firefighting and want the ideal code base, but in my opinion this requires you to spend the time up front planning the system out.
I'd recommend trying to work with your manager to ensure that the areas you feel are lacking now are not lacking in the next project. Maybe it's putting you in the lead, having more code reviews with peers, or maybe it is further training for the entire team.
Either way, I think this is something that most of us go through. I do agree with the other person advising some caution on this. I know that code I wrote yesterday seemed great at the time and looking back on it, can probably find 10 other ways to do it and make it look cleaner.
Have you considered adding FxCop to the automated builds to enforce coding style? Other than that, you could try suggesting TDD, which would give whoever writes the tests the power to enforce that the interfaces for each class are structured in a particular way.
Off the top of my head, that's all that I can think of.
Things in life are not perfect and if you start nitpicking, feathers will be ruffled and relationships soured.
The best method is to pick your battles carefully. If something is small enough ignore it and live with it. If it is big and worthwhile (i.e. the management sees ROI in backing you) go for it.
This is apt for your situation...
God, grant me the serenity to accept the things I cannot change, courage to change the things I can, and the wisdom to know the difference.
One thing I try to do that may help you: if a part of the code is bad, and the idea you propose to fix it is agreed to be best but the "no time" excuse is given, why don't you rewrite it, say, on your own time? If you decide to stick around at that job for a while it will only help you, and you will learn and become a better programmer.
Note that it is a good idea, and I would even say required, to do a complete code review of that change before check-in, and you should try to time the check-in so that it comes before a complete regression test cycle for a release. That way your refactoring is completely tested out. Over a period of 6 months or so, it will start showing a beneficial impact and you can then ask for time allocation for this, with proof to back it up.
The only thing that has a chance of convincing management is demonstrating that the things you are citing as perceived problems become actual problems.
To try to take advantage of this, try to keep the "complainer" tone down to a minimum, that is, focus on how this affects the bottom line rather than how it makes you feel. Point out possible consequences of poor decisions that you see being made. If those consequences come to pass, and they cost more than an up-front fix would have, gently remind management that you foresaw the difficulty and provide a helpful suggestion as to how future similar costs can be avoided with a little up-front effort.
The problem is, in many organizations, the problems will never cause enough of an issue for management to care, or if they do, they won't see the connection between your perception of the problem and the actual problem the way it occurs. In these cases, you end up seeming like a needlessly persnickety technical person, which isn't a reputation you want to have.
So my advice is, pick your battles. If there is something very egregious that others are about to let slip, then you can speak up and perhaps be vindicated later. For the little details that just grind away at you, I'm afraid there's not much you can do but put up with it.
Show them their own forgotten code disguised as yours for critique.
Take an old piece of their code they have forgotten about
Pretend you wrote it
Ask them to figure out something with it
Make sure they point out how bad the code is for whatever reason
Add your own items. Brainstorm what should be done since it's your fault.
Let them know you didn't know how to bring it up without offending them, but that it's their code.
If they recall that they wrote it, they might catch on..
If you have a good relationship with your manager, you might be able to use this to work yourself into a "Senior" or "Lead" Developer role. You could propose that it would be best if one person on the team takes technical leadership of the code base. It would be your job to review the code of others and ask them to make improvements when you feel it is necessary. If you go this route, just make sure to take it slowly. If you ask for a lot very quickly, then you could end up pissing off all the other developers.

Improving really bad systems

How would you begin improving on a really bad system?
Let me explain what I mean before you recommend creating unit tests and refactoring. I could use those techniques but that would be pointless in this case.
Actually the system is so broken it doesn't do what it needs to do.
For example the system should count how many messages it sends. It mostly works but in some cases it "forgets" to increase the value of the message counter. The problem is that so many other modules with their own workarounds build upon this counter that if I correct the counter the system as a whole would become worse than it is currently. The solution could be to modify all the modules and remove their own corrections, but with 150+ modules that would require so much coordination that I can not afford it.
Even worse, there are some problems that have workarounds not in the system itself, but in people's heads. For example, the system cannot represent more than four related messages in one message group. Some services would require five messages grouped together. The accounting department knows about this limitation, and every time they count the messages for these services, they count the message groups and multiply by 5/4 to get the correct number of messages. There is absolutely no documentation of these deviations, and nobody knows how many such things are present in the system now.
So how would you begin working on improving this system? What strategy would you follow?
A few additional things: I'm a one-man army working on this, so it is not an acceptable answer to hire enough people and redesign/refactor the system. And in a few weeks or months I really should show some visible progress, so it is not an option either to spend a couple of years doing the refactoring myself.
Some technical details: the system is written in Java and PHP, but I don't think that really matters. There are two databases behind it, an Oracle and a PostgreSQL one. Besides the flaws mentioned before, the code itself smells too; it is really badly written and documented.
Additional info:
The counter issue is not a synchronization problem. The counter++ statements were added to some modules, and not added to some other modules. A quick and dirty fix is to add them where they are missing. The long-term solution is to make it a kind of aspect for the modules that need it, making it impossible to forget later. I have no problem with fixing things like this, but if I made this change I would break over 10 other modules.
Update:
I accepted Greg D's answer. Even if I like Adam Bellaire's more, it wouldn't help me to know what would be ideal to know. Thanks all for the answers.
Put out the fires. If there are any issues of critical priority, whatever they are, you've got to handle them first. Hack it in if you must, with a smelly codebase it's ok. You know you'll improve it going forward. This is your sales technique targeted at whomever you're reporting to.
Pick some low-hanging fruit. I assume you're relatively new to this particular software and that you were re-tasked to deal with it. Find some apparently easy problems in a related subsystem of the code that shouldn't take more than a day or two to resolve apiece, and fix them. This may involve refactoring, or it may not. The goal is to familiarize yourself with the system and with the style of the original author. You may not get really lucky (One of the two incompetents who worked on my system before me always post-fixed his comments with four punctuation marks instead of one, which made it very easy to distinguish who wrote the particular segment of code.), but you'll develop insight into the author's weaknesses so you know what to look out for. Extensive, tight coupling with global state vs poor understanding of language tools, for example.
Set a big goal. If your experience parallels mine, you'll find yourself in a particular bit of spaghetti code more and more often as you perform the prior step. This is the first knot you need to untangle. With the experience you've gained understanding the component and knowledge about what the original author likely did wrong (and thus, what you need to watch out for), you can start envisioning a better model for this subset of the system. Don't worry if you still have to maintain some messy interfaces to maintain functionality, just take it one step at a time.
Lather, rinse, repeat! :)
Given time, consider adding unit tests for your new model one level underneath your interfaces with the rest of the system. Don't engrave the bad interfaces in code via tests that use them, you'll be changing them in a future iteration.
Addressing the particular issues you mention:
When you run into a situation that users are working around manually, talk with the users about changing it. Verify that they'll accept the change if you provide it before sinking the time into it. If they don't want the change, your job is to maintain the broken behavior.
When you run into a buggy component that multiple other components have worked around, I espouse a parallel component technique. Create a counter that works how the existing one should work. Provide a similar (or, if practical, identical) interface and slide the new component into the codebase. When you touch external components that work around the broken one, try to replace the old component with the new one. Similar interfaces ease porting of the code, and the old component is still around if the new one fails. Don't remove the old component until you can.
What is being asked of you right now? Are you being asked to implement functionality, or fix bugs? Do they even know what they want you to do?
If you don't have the manpower, time, or resources to "fix" the system as a whole, then all you can do is bail water. You're saying you should be able to make some "visible progress" in a few months' time. Well, with the system being as bad as you described, you may actually make the system worse. Under pressure to do something noticeable, you'll simply add code, and make the system even more convoluted.
You need to refactor, eventually. There is no way around it. If you can find a way to refactor that is visible to your end users, that would be ideal, even if it takes 6-9 months or a year instead of "a few months." But if you can't, then you have a choice to make:
Refactor, and risk being viewed as "not accomplishing anything" despite your efforts
Don't refactor, accomplish "visible" goals, and make the system more convoluted and more difficult to refactor one day. (Maybe after you find a better job, and hope the next developer to come along can never find out where you live.)
Which one is most beneficial to you personally depends on your company's culture. Will they one day decide to hire more developers, or replace this system completely with some other product?
Conversely, if your efforts to "fix things" actually break other things, will they be understanding about the monstrosity you're being asked to tackle single-handedly?
No easy answers here, sorry. You have to evaluate based on your unique, individual situation.
This is a whole book that will basically say unit test and refactor, but with more practical advice on how to do it
http://www.amazon.com/Working-Effectively-Legacy-Robert-Martin/dp/0131177052
You open the directory that contains this system with Windows Explorer. Then, press Ctrl-A, and then Shift-Delete. That sounds like an improvement in your case.
Seriously though: that counter sounds like it's got thread-safety issues. I'd put a lock around the increasing functions.
And regarding the rest of the system, you can't do the impossible so try to do the possible. You need to attack your system from two fronts. Take care of the more visibly problematic issues first, so you can show progress. At the same time, you should deal with the more infrastructural problems, so that you have a chance at actually fixing this thing some day.
Good luck, and may the source be with you.
Pick one area that would be of medium difficulty to refactor. Create a skeleton of the original code with only the method signatures of the existing ones; maybe use an Interface even. Then start hacking away. You can even point the "new" methods to the old ones until you get to them.
Then, testing, testing, testing. Since there aren't any unit tests, maybe just use good old fashioned Voice-Activated-Unit Tests (people)? Or write your own tests as you go.
Document your progress as you go in some kind of repository, including frustrations and questions, so that the next poor schmuck who gets this project won't be where you are :).
Once you get the first part done, move on to the next. The key is to build on top of incremental progress, that's why you shouldn't start with the hardest part first; it'll be too easy to get demoralized.
Joel has a couple of articles on rewriting/refactoring:
http://www.joelonsoftware.com/articles/fog0000000069.html
http://www.joelonsoftware.com/articles/fog0000000348.html
I've been working with a legacy system with the same characteristics for almost three years now, and there are no shortcuts that I'm aware of.
What bothers me most with our legacy system is that I'm not allowed to fix some bugs, since many other functions could break if I fixed them. This calls for ugly workarounds, or creating new versions of old functions. Calls to the old functions can then be replaced with the new ones, one at a time (while testing).
I'm not sure what the goal of your task is, but I strongly advise you to touch as little of the code as possible. Only do what you need to do.
You may want to get as much as possible documented by interviewing people. This is a huge task, since you don't know which questions to ask, and people will have forgotten a lot of details.
Other than that: make sure you're getting paid and enough moral support. There will be weeping and gnashing of teeth...
Well you need to start somewhere, and it sounds like there are bugs that need fixing. I would work through those bugs, making quick win refactorings, and writing any unit tests possible along the way. I would also use a tool like SourceMonitor to identify some of the most 'complex' parts of code in the system and see if I could simplify their design in any way. Ultimately, you just have to accept that it will be a slow process, and make small steps towards a better system.
I would try to pick a part of the system that could be extracted and rewritten in isolation fairly quickly. Even if it doesn't do much, you could show progress pretty quickly, and you don't have the problem of interfacing with the legacy code directly.
Hopefully, if you could pick off a few such tasks, they will see you making visible progress, and you could put forward an argument for hiring more people to rewrite the bigger modules. When parts of the system rely on broken behaviour, you don't have much choice but to separate before you fix anything.
Hopefully, you could gradually build a team capable of rewriting the whole lot.
All of this would have to go hand in hand with some decent training, otherwise people's old habits will stick, and your work will get the blame when things don't work as expected.
Good luck!
Deprecate everything that currently exists that has problems, and write new ones that work correctly. Document as much as you can about what will change and put big red flashing signs all over the place pointing to this documentation.
By doing it that way, you can keep your existing bugs (the ones that are being compensated for somewhere else) around without slowing down your progress towards getting an actual working system.

Managing feature creep in GUIs [closed]

Does anyone have any practical suggestions about how to manage feature creep in GUIs?
I'm getting strong pressure from both internal and external sources to add, modify, tweak, etc. I always cringe when someone approaches me with the words "wouldn't it be nice if...?". I can't just turn around and yell "NO" at them, because often they are my superiors or customers.
Instead, I'm looking for suggestions to help explain why it's a bad idea to be constantly adding new features, and in doing so, manage their expectations of the final product.
Have feature requests handled in a formal process, normally through the project manager and whoever analyzed the requirements originally. It's always better to palm those sorts of decisions off to someone who isn't the developer, assuming that whoever is going to do that job is actually capable of it.
If you're freelance then obviously charge for changes to the requirements, and if you're an internal development team, then you could consider inter-department billing to make sure people think about what they want to spend money on.
Finally, expect requirements to change and feature creep to happen. If you code without considering what changes might be requested, or your process and/or deadlines are so inflexible that you can't adjust to this, then you'll find that the project will become a nightmare.
What I do is keep feature ideas on index cards and post the cards somewhere visible. When someone asks, "Could it also do XXX?" I write a new card. This is a better relationship building move than screaming "NO!" :-) It also has the advantage of not losing potentially good ideas. OTOH, I'm under no compulsion to implement it right then. The suggester knows they've been listened to, I know I won't forget, I can get back to work, and we can all get together to make priority decisions at a better time than when my brain is in CodeLand.
All right then, I'll be the voice of agile here. The problem can't be solved at the end of that process, it has to be avoided by managing the project differently.
Aside from a specific methodology, the trick is to put those decisions into the hands of the customers. You have a list of things to be done. When they want to change that list, you ask them which item from the list won't be getting done to accommodate the new item. Or, how much more money they will be giving you to handle it.
Also, you have to do the work in small iterations (a week to a month) so they have chances to readjust in between.
We use SCRUM and it's been great. After a couple of iterations all the business-level and process-level items get worked out and you're delivering exactly what they want by the end.
Ideally requests like this should be handled by the person in charge of the functional design. Whether you like it or not, changes will happen (from the first letter in the functional design to the last byte of code and beyond) and there will always be requests for extra features. So make sure your design is up for such a dynamic process.
This will probably sound like a very lame solution (and I doubt it's good practice), but I have been struggling with the same issues in the past. The fact that it happened in a very small company (lack of 'layers' in management) made it worse, since I was in charge of development, functional design, technical design and managing my own projects.
What worked for me is to deflect the problem back to the person asking (whether it was a superior or the customer). Hand over the functional design, a prototype printout or whatever describes the current situation, and ask them to figure out 'how' and 'where' this mighty new feature should be implemented.
Both the superiors and the customers were then 'forced' to take it back to their own people, discuss it in meetings and whatnot. Usually this means that you don't hear from it a second time. In the cases where it did come around, it was actually a concept that worked.
Your company appears not to be defining requirements clearly before starting a project, and this will only end in tears.
My policy is to get a clear breakdown of all requirements in advance and have all parties know the implications of going beyond these requirements:
Progressively Delayed Release times
Increased Bugs
Incomplete features
Staff stress
Staff resignations.
Extra charges for expecting more out of the final product than was agreed upon at the declaration of a price (and this is REALLY BAD)
If they don't want to adhere to a system that is sustainable and productive, one might want to opt for #5, or threaten with #5.
For managers:
the sooner the product is released to the market (assuming it's shrink-wrap), the sooner the company can make money and the better the cashflow.
don't rule out the new feature outright, but balance it against the value you can derive from doing alternative work; explain the opportunity cost.
For everyone:
if the new features are in-your-face in the UI, start talking about the effect of visual complexity on the usability - and from that, the attractiveness - of the product as a whole. But I'm sure you're already doing that. I'll try digging out some references...
The best way IMHO is to clearly outline exactly what the cost of implementing the new features will be. "It would be nice if" really starts to dwindle when the user starts to see the cost of such additions.
Disagreeing with the customer about a feature usually gets you nowhere. If you blatantly say NO to them they will feel alienated and out of touch with you and your team. The feature probably is a good idea overall, granted you have all the time and money in the world and no technical limitations. In their world, being able to see a fiz next to a bar after they click on a snip is a good idea. Of course in our world it means a full table scan, a potential security vulnerability, and an all-nighter to make sure it's in by the next point release.
If you lay it out for them and explain why it is not a really good idea overall they will usually understand. Don't forget all of the different factors (time/money/cost of adding complexity to the project/risk of slipping deadlines). A reasonable person will understand if you paint the picture clear enough, and you can at least say "I told you so" to an unreasonable person.
You cannot handle just feature creep - you need to organise your whole development process in a proper way.
However, from your description it seems that you just code what other people ask for and cannot re-organise the process. In this scenario your best way to manage the requests effectively is to have a tracking/ticketing system which allows you to receive requests from other people, prioritise them, estimate them, agree the implementation schedule and track the time you actually spend working on them.
When you are able to prove with real-world figures that 'this small button' will take 2-3 days instead of the 5 seconds the customer probably believes it should, you will be in a much better position to negotiate.
If you are able to clearly show that the project go-live date will be delayed by two weeks because of the new features, you might see those requests simply vanish.
You have to remember, however, that 'feature creep' is not always a negative thing. As the application matures and grows, your customers' priorities change as well. Failing to acknowledge that could mean that your finished product will not be what they want.
Try checking whether they would accept trading a new feature for an old one from the original specification which is not yet implemented.
I keep a prioritized list of work tasks and my estimates on what will be in build X and how long (roughly speaking) I expect it to take to write tests, implement code and do whatever else is related. I always take their inputs, discuss what they really want/need and insist that we determine where it fits in the grand scheme of things. We talk about the impact to schedule and other tasks.
It keeps the communication line open and clear - there aren't any surprises and the expectations are managed. In the end, it isn't my program - it is the customer's (whoever the customer is) and I want to build them what they want (and need) built.
The key seems to be in the question.
'Managing feature creep'... you do this by implementing a management process that needs to be followed. You can't avoid it (after all, it's frequently the customers requesting it, and shouting no at them all the time tends to drive the poor creatures away)... but that doesn't mean it has to be undisciplined. With a procedure in place that requires the person placing the request to give simple things like a justification and a preliminary investigation/use-case for the change, you start to reduce the number of 'wouldn't it be nice to' requests. Once you have this in place, your feature creep is managed and you can start prioritising and providing more consistent feedback.
Your users have lots of needs that aren't taken care of. They are suffering. They need attention, and they need you. I think feature creep is something that happens when you don't implement the right features already.
Cultivate a close relationship with your users. Let them know you are always interested in their input. Periodically give them a call and ask how your software is treating them.
Get to know their work habits, standard practices, how they use your software, and how they use their other software. As that information comes in, collect it.
When feature requests come in, your users won't really know what they want. You know what they want, though, because you have expertise and you've been listening. So, work with them to clarify the problem they're having, then use your collected knowledge to generalize the problem as best you can. Write a solution that solves that problem.
On the other hand, "feature creep" is often the response of a software product to an evolving business. If your customer's business is growing, you're fortunate, because they will spend more money on your work. So relax, they'll pay you! They just need to understand that, the bigger a system gets, the harder it is to change, and sometimes a new small feature necessitates a big rewrite, or a whole new user interface, in order to keep everything working smoothly.
You have to be careful to balance the desire to avoid feature creep against the tendency to ignore feature requests and feedback.
Every time a user comes to you with feedback, that's an opportunity to improve your product and what you're working on. It may end up that you're adding something interesting to both the user, and your developers; it might actually be fun to work on. And yes, it may be a stupid idea, as posed to you. But it's your job to accept the feedback, extract anything positive from it, and shape it into something valuable to your users, the product, your company, and your development team.
That being said, feature creep is a very difficult thing to manage. And how well you manage it depends on your position and who the "creep" is. If you're a mid-to-junior level developer, and the CEO is demanding a feature; well, you're going to be adding that feature. You can try to convince the CEO that it's not a valuable feature, or it won't work, or there are more important things to be working on, or it will negatively impact the schedule. But never do any of that at the time the feature is being requested. All you'll end up with is two people defending their position instead of working together towards a common goal.
Instead, accept the feedback and feature request (or feature demand) at face value immediately. Walk away, think about it openly for a while by yourself. "Could this be valuable?" "Am I missing something in the way the CEO asked for this?" "Is it as hard as I'm making it out to be?" Ask yourself these kinds of questions, and come up with some concrete answers. Then always go back to the CEO with follow-up questions. Demonstrate that you've thought about the feature requested, and have actually come up with some ideas, tweaks, enhancements, or objections, etc. This will create an open discussion. One that the CEO hadn't anticipated, but that he most likely would not object to since it was not outright resistance to his idea initially.
One of our financial backers requests features all the time. Sometimes he asks, can we get the software to do 'x'? If it is possible we tell him yes, and then ask him what timescale he had in mind. If he comes back with ASAP, we tell him that some other feature will have to give, or our deadlines will have to be extended. Thankfully he then normally changes his mind to sometime in the future.
I think the most important thing is to actually record the idea or request, even if the feature doesn't get implemented straight away.
We use Bugzilla to keep track of bugs - but also feature requests. We have a 'features' worklist (or target version)... that way everyone can see what features we would like to develop in the future, and as people have more ideas on a feature they can simply add more to the item in Bugzilla.
Every release, when we sit down and work out the worklists for a version, we dip our toes into the features list to see if there is anything we can pull in. We do try to pull in a feature when we can and give feedback to people - this shows that features and ideas are not falling on deaf ears.
This feedback helps people know that we are acknowledging their feature requests and we DO get round to implementing them, rather than them just sitting on a list which gets bigger and bigger.
Remind stakeholders what every added feature costs:
1) Increased time before release.
2) Increased cost.
3) Exponentially growing maintenance cost.
4) Increased potential for bugs.
In order to manage a feature request, ask them to submit a change order. Periodically, review the change orders and send back a statement about each request: "This will take X long to do, implying Y additional cost. Is this acceptable?" Once the requester has accepted the additional cost, then that's a-ok. Your hands are washed. :)
You explain: "Sure, it's doable. Would you like an estimate of how much it will push out the project completion date? Producing that estimate will itself add about a day to the project end."
There's nothing wrong with adding features, so long as the stakeholders understand that there is a cost associated with doing so.
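A back-of-the-envelope sketch of that conversation, with made-up dates and effort figures (and counting calendar days rather than working days, for simplicity):

    from datetime import date, timedelta

    current_completion = date(2024, 6, 1)  # hypothetical planned ship date
    estimation_overhead_days = 1           # producing the estimate itself costs time
    feature_effort_days = 12               # rough effort for the requested feature

    new_completion = current_completion + timedelta(
        days=estimation_overhead_days + feature_effort_days
    )

    print(f"Accepting this change pushes completion from "
          f"{current_completion} to roughly {new_completion}")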
Suppose you build a product that has exactly one feature and all 100 of your customers love your product and find it easy to use. Now suppose that you add ten more features to your product that only 10 of your customers will use. Now you will find that 90% of your customers have much more trouble using your product because there are ten times more choices to make, and ten times as many things you could do that won't help you. The good stuff has been lost in the noise.
This is of course a simplification but the reality is that most of your users will only use a small portion of the features of your product.
Read some good books on software design and UI design, and get your manager to read them too. Joel Spolsky's books are a good place to start - www.joelonsoftware.com
If there's a lesson we can learn from the Internet and Web 2.0 kinds of things, it's that people love customization. That's what iGoogle and hundreds of other sites are all about. If you can build customization in to your GUI, chances are your customers will love you for it.
Also, take a look at how other projects successfully manage feature creep. For example, Google lets users submit feature requests, but also shows a list of features already requested. Users can then vote to request that feature as well. Not that I'm a suck up, but take a look at stackoverflow.uservoice.com. They have a similar policy.
It's critical to listen to your users and get their feedback. Expect them to come up with new ideas that are better than yours. Expect them to come up with ideas that you think are dumb. If enough people want it, and it seems reasonable, give them what they want.
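A toy sketch of that voting idea (the feature names and threshold are invented for illustration):

    from collections import Counter

    votes = Counter()
    for feature in ["dark mode", "CSV export", "dark mode",
                    "dark mode", "keyboard shortcuts"]:
        votes[feature] += 1

    THRESHOLD = 3  # an arbitrary stand-in for "enough people want it"
    popular = [f for f, n in votes.items() if n >= THRESHOLD]
    print("Worth scheduling:", popular)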
Create work mandates that define the problem that needs solving. Your work is constrained by only needing to implement that which is necessary to solve the problem.
Any further refinement of the problem then becomes change control.
We follow wireframes in my office. Any change after signoff has to go through a Change Control procedure.
Lock-in feature set for a short time frame (Scrum/iteration/agile). As the user starts seeing things working, the necessity or lack thereof of features will become more apparent.
Also, it is helpful to have a person through which all changes come (in Scrum, a really good Product Owner).
Show them how simple GUIs can be effective. Examples: Google Chrome, Apple's software. You may also want to show examples of bloated software, like Eclipse, Netbeans, Visual Studio... OK, these are actually all software IDEs, but they all have cluttered interfaces.
The trick is to define the project as a sequence of versions. Your initial design is for version 2.0, but the intended first release is version 1.0. All new ideas (features) are welcomed, but since, due to scheduling, version 1.0 is frozen, the new ideas have to go into version 2.0.
Of course, as soon as version 1.0 is released you begin bug fixing and coding for a maintenance release, version 1.01, and so on... Perhaps version 2.0 never actually gets released, but is used as an elusive goal and a parking place for features that are good, but not good enough to delay the release of a working version.
The right question to ask is "How can I give the developers a stable environment, while still responding only to high-benefit feature requests?" A Scrum-like approach would be:
Stable environment:
Have the developers work on a small fixed set of features during a small fixed iteration interval.
Responding only to high benefit feature requests:
One person maintains a prioritized list of features. New features can always be added (this cuts down a lot of politics). However, only the high-priority items are selected for the next iteration.
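As a minimal sketch of that selection step (the priorities, estimates and capacity figure are all invented), only the top of the backlog that fits the iteration gets pulled in:

    backlog = [
        # (feature, priority: lower = more important, estimate in points)
        ("Single sign-on", 1, 8),
        ("Export to Excel", 2, 5),
        ("Animated splash screen", 9, 3),
        ("Inline help tooltips", 3, 2),
    ]

    CAPACITY = 13  # points the team can deliver in one iteration

    selected, used = [], 0
    for feature, _, points in sorted(backlog, key=lambda item: item[1]):
        if used + points <= CAPACITY:
            selected.append(feature)
            used += points

    print("This iteration:", selected)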
Communication is key. In a relationship with a client, it must be clear to them that when a plan is created with a set of features, that is the set of features. If creep still happens, the fault lies with those interacting with the client, who are either misleading the client or are somehow intimidated by them.
As for developers contributing to feature creep, the key is to find a balance between making decisions on implementation and outright adding new features. Again, communicating with the developer on a regular basis will likely curb an issue here.
It might not be possible to avoid all feature requests.
But try assigning a cost to each feature request. When the next planning meeting comes around, or when you decide the features for the next release, this will help to weed out the unnecessary ones.
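One rough way to do that weeding (the benefit and cost numbers below are invented for illustration) is to rank requests by a benefit-to-cost ratio and park anything below an agreed threshold:

    requests = [
        {"title": "One-click reorder",    "benefit": 8, "cost_days": 2},
        {"title": "Themeable UI",         "benefit": 3, "cost_days": 15},
        {"title": "Saved search filters", "benefit": 6, "cost_days": 4},
    ]

    for r in requests:
        r["ratio"] = r["benefit"] / r["cost_days"]

    CUTOFF = 1.0  # an agreed, arbitrary threshold for this sketch
    keep = [r["title"]
            for r in sorted(requests, key=lambda r: -r["ratio"])
            if r["ratio"] >= CUTOFF]
    print("Candidates for the next release:", keep)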
If you're not the manager or owner of the project, I prescribe the following:
If they want it, do it. Make sure they pay you on payday. I've learned that sometimes the battle to get things to conform to what you would like isn't worth fighting. Enjoy life after work, and plan and code your own personal projects that do things the right way.
The answer to your question is broader than just GUIs. Feature/scope creep will always happen when no one is paying attention to what the contract stipulates and there isn't a formal process for handling change requests.
If you lack the ability to implement the formal process or influence its creation, I suggest you get all feature change requests documented in email, and that you notify your management of the possible consequences in email. This isn't to get anyone, but rather to protect yourself from the fallout of the eventual failure.
At some point, you have to ship something. Assuming you're going through some sort of a formal test process, as long as the product continues to change, testing is never going to be able to sign off on a working product.
It helps to come up with a timeline describing what features will be released and when. That way the people pushing for the new features have some idea that their requests will be handled. It doesn't mean they're going to be handled right now, but it should provide them some reassurances that the next version will address their concerns.
