What is the most esoteric internal feature you have found or read about? - internals

For me it is the security cookie created for each process to prevent buffer-overflow attacks.
Tracking its creation with a debugger, I found it's created by XORing the Thread ID, the Process ID, the PerformanceCount, the TickCount, and more... And then, if by any chance the most significant word is zero, the least significant word is copied into it...
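Roughly, the scheme described above looks like the following hand-wavy sketch (an illustration of the idea only, not the actual Windows CRT code; the value sources here are stand-ins):

    import os, random, time

    def make_security_cookie() -> int:
        # Mix several hard-to-predict values together (stand-ins for the real sources).
        cookie = os.getpid()                                  # process ID
        cookie ^= random.getrandbits(32)                      # stand-in for the thread ID
        cookie ^= time.perf_counter_ns() & 0xFFFFFFFF         # stand-in for the performance counter
        cookie ^= int(time.monotonic() * 1000) & 0xFFFFFFFF   # stand-in for the tick count
        cookie &= 0xFFFFFFFF
        if (cookie >> 16) == 0:                               # most significant word ended up zero?
            cookie |= (cookie & 0xFFFF) << 16                 # copy the least significant word into it
        return cookie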

Leap years.
Ha ha, I know, it's no secret issue. But it's also an amazingly common "gotcha" and a painful set of exceptions for any date code. Most date libraries would be trivial, a dozen lines of code, except for leap year support, which is essential, rarely done right, and much more complex than it initially seems.
Remember also that leap years are not "every 4 years" at all. There's a complex set of rules. And even worse, you have to handle historical exceptions... and you have to decide between the Gregorian and Julian calendars.
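For the Gregorian rules alone, the core check is only one line, but it's the line people get wrong; a minimal sketch:

    def is_leap_gregorian(year: int) -> bool:
        # Every 4th year, except centuries, unless the century is divisible by 400.
        # 1900 -> False, 2000 -> True, 2023 -> False, 2024 -> True
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    def is_leap_julian(year: int) -> bool:
        # The older Julian rule really is just "every 4 years".
        return year % 4 == 0

The historical part (which country switched calendars in which year, and the days that were skipped when they did) is where the dozen-line library turns into a monster.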

Related

What is the danger in simply increasing the Connection Pool?

I'm doing some performance testing against an Apache server and getting the dreaded message "the connection pool has been exhausted! Dun Dun Dun." (dramatic emphasis added)
The current proposal floating around among the devs is to simply increase the connection pool size. While this may be valid, little bells are going off in my head saying "well, that's too simple - surely there are downsides to this", so I ask the community what some of those may be.
I would like to keep this as generic as possible so that it might be the most use to the most people.
Warning: nostalgia post.
Many years ago, I worked in a mainframe environment. My job was to run ad-hoc SQL reports to evaluate the effectiveness of the Christmas sales campaigns. "How is line X doing, against line Y, over the last Z weeks, against the same weeks last year"... and similar.
It was exciting: ad-hoc queries, directly against the live database, marketing decisions being made for the afternoon, based on the morning's questions from the buying department. No SQL errors permitted, no testing, no development environment, it was right first time, or nothing.
I left for lunch, mad fool that I was, having crafted a multi-table join that would give the desired results, albeit after a 30 minute query, secure in the knowledge that no locks were being taken to prevent the transactional ordering system saving new orders. It was mid December, at the height of the Christmas ordering season.
I returned, the company in chaos. The whole ordering system was down, everything was timing out, no one knew why.
Except that CICS couldn't get connections. The pool was exhausted, so new transactions just failed.
It was my report. It took a ton of CPU. It ran a long time. Everything else ran just that little bit longer. So the connection pool ran out of connections, and everything else died for a while.
My query completed. It gave the right results. The buying department was happy, since they could make the right decisions for the next day. Orders started getting saved again, and guess what...
I had been horrified and embarrassed, but amazingly, I was temporarily a local hero, for providing the crucial report just in time. The half-hour crisis was a catastrophe at the time, but a day later, just a memory.
If there's a moral here, it's this:
There's no single right answer to this sort of question.
If you can, find out why you're running out of connections, since it might be an indication of something serious.
But it might not, and tomorrow, there might be a different problem.
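For what it's worth, the arithmetic behind a pool drying up like that is roughly Little's Law: connections in use ≈ request rate × how long each request holds a connection. A back-of-the-envelope sketch, with all numbers invented for illustration:

    # Little's Law: average connections in use = arrival_rate * hold_time.
    arrival_rate = 50      # requests per second hitting the database
    hold_time = 0.2        # seconds each request normally holds a connection
    pool_size = 20

    print(arrival_rate * hold_time, "of", pool_size)                 # 10.0 of 20: comfortable

    # A heavy report slows every other query just a little...
    hold_time_during_report = 0.5
    print(arrival_rate * hold_time_during_report, "of", pool_size)   # 25.0 of 20: exhausted

Which is why simply raising the pool size can work, but it only buys headroom; if the hold time keeps growing, the bigger pool exhausts too (and may just move the bottleneck onto the database).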

What should the penalty/response for missing a deadline be? [closed]

Being relatively new to the software industry I have come across a question of deadline enforcement:
Back in the idyllic age of academia, the deadline was the end of the semester and the penalty was a well-defined 'F' (or local equivalent). Out here in the real world, where we need to make code our current and future peers can work with, I face the situation where the deadline comes, the deadline goes, and the project is still not finished.
Now what? On one extreme we could fire everyone involved, on the other we could richly reward everyone involved.
What actions have you seen applied as 'penalty' for a missed deadline, and which of these eventually resulted in more-good-code?
What project-management responses caused the project to fail outright, and what responses restored working order and resulted in code that could be maintained afterward?
What responses resulted in more-bad-code?
Deadlines are part of a fundamentally wrong idea about how to do software development. People new to, or outside of, the software development industry do not understand this:
Software is done when it is done, no sooner and no later.
If a developer has a task and a week to do it, and it looks like it will take more than a week, nothing can be done to change that. No matter how much harder the developer works, no matter how many people are added to the task, it will still take as long as it takes (in fact adding people usually makes it take longer).
Instead, read up on the agile development process. Software should be developed iteratively, and each iteration should be based on the results of the previous iteration, not on external requirements imposed.
Edit based on the extensive comments below:
I would never argue that developers cannot be held to some kind of delivery expectation. My point is in response to the specific hypothesis which the asker posed - that the nature of software development in business is somehow analogous to schoolwork, or any other kind of work for that matter. I contend that it absolutely is not. "Deadline" implies much more than a simple delivery date. It is a fixed point by which a fixed amount of work must be completed. Software simply does not work that way. I wrote a few more paragraphs explaining why, but honestly if you don't already believe that, nothing I say is going to convince you.
If you are working on a software project and it is clear you will not be able to reach your deadline, what can you do to rectify that? The answer is well-known by now: practically nothing. You can't add more people. You can't "work faster". It's just not going to get done on time. You tell the stakeholders, everyone adjusts, and you keep working (or not). What, then, did the original date mean?
Anyone who claims software development is analogous to bridge-building or homework, or that impending deadlines can still be met if the developers would just get their shit together and work their asses off, is deeply confused about their own profession.
Your first reaction should not be what to do in response to the missed deadline, but to analyse why you missed the deadline. The response to missing the deadline would then follow naturally from that as a consequence of the reason.
For instance, if everyone involved didn't do their job, fire them.
But if they did their job, and more, then why was it still missed? Too many other activities handled by the same people? Too big a scope for the deadline (i.e. an unrealistic deadline)? Or... etc.
The top reason for missing a deadline in my experience is that people aren't allowed to work 100% on the project at hand, and thus any estimates you might have, although accurate on their own, aren't really useful at all. That, plus unrealistic estimates and deadlines.
Developers should never be penalized for Management's mistakes.
It's like a parent punishing a child because the parent had a bad day.
Reasoning:
Deadlines are a fact of life. People want to know how long something will take. The best we can do is estimate/guess. It is the role of management to figure out this magical, never-correct guess. When they create a deadline, they need to use the right tools (experience, ASKING FOR HELP FROM DEVELOPERS, lawyers, HR, etc.).
However....
The penalty for missing a deadline should not fall back on the workers. It is the management's fault for missing deadlines. They should have said no, should have scaled back the project or should have motivated the workers better.
In a construction crew, if you piss off the workers, you start a fight. In my company, if we miss deadlines, the management gets in trouble. Not the workers. It's the manager's job to control the project and what is done. The workers are only doing what they can. The managers are in charge of assigning roles and tasks.
I'm not saying the quality of workers isn't a factor, but the management should KNOW that! It doesn't take a genius to know that a project isn't well thought through or nicely controlled. Ask anybody if their manager has any idea what's going on and you'll find the problem.
We stopped missing as many deadlines when the managers realized it was their fault for setting/agreeing to the deadlines.
</rant>
Re: The questions:
1. What actions have you seen applied as 'penalty' for a missed deadline, and which of these actually made things 'better'?
Manager has less responsibility. This person does not get promoted or publicly thanked. Most likely this person will be moved to a "less-critical" project.
2. What project-management responses caused the project to fail outright, and what responses restored working order and resulted in code that could be maintained afterward?
feature creep: the manager keeps adding more stuff to the list. <- Fight this off with a list of tasks ordered by priority. When you add things to the list, compare their priority with the things around it. Make new things harder to be set as "top priority."
too many bugs in the code: Managers need to require tests (at least for critical paths) and automation. Builds need to be standard and automatic. Real users need to see the code before it is "finished."
un-readable code: Institute peer code reviews. If someone has dirty code, ask someone to "help" them with a project.
If you have the salesman problem, where the salesman promises features that don't exist/work: Management needs to step in and explain the problem to that salesman. Also, not giving that salesman public affirmation for a job well done sometimes helps.
Rather than a penalty, how about realistic estimates and rewarding on-time releases?
Inspired by the comments to my response
Maybe the question should be "How do I make realistic estimates?" For me, I use FogBugz estimation history and completion date plots. These give me data points for how long I estimated a task would take and how long it actually took. This has helped guide me to give realistic release dates in the long run (it didn't happen overnight); a rough sketch of that kind of tracking follows the list below. I find estimating timelines to be an iterative process: I
design
estimate
develop
find a shortfall in the design & iterate.
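Roughly, the idea is just to keep each estimate next to its actual and let the historical ratio temper the next guess; a minimal sketch with made-up numbers (this is the spirit of evidence-based scheduling, not FogBugz's actual algorithm):

    # (task, estimated hours, actual hours) -- made-up history
    history = [
        ("login form", 4.0, 7.0),
        ("report export", 8.0, 10.0),
        ("search filter", 3.0, 6.0),
    ]

    ratios = [actual / estimate for _, estimate, actual in history]
    fudge = sum(ratios) / len(ratios)      # average overrun factor, ~1.67 here

    def calibrated(estimate_hours: float) -> float:
        """Scale a raw gut-feel estimate by the historical overrun factor."""
        return estimate_hours * fudge

    print(calibrated(5.0))   # a "5 hour" task probably lands nearer 8-9 hours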
Death. Clean and simple.
Depends on whether developers set deadlines on each modification request, or whether these are set for them by management.
In the latter case, unless all your developers are sitting and playing Halo 3 all day, a missed deadline is often an indication of a mistake on the side of management or the team leads. So firing everyone wouldn't solve the problem. It might make sense to introduce better indicators into your software process so you could see that the deadline would be missed long before it happens.
If your developers do give time estimates, then I would be very careful about rewarding and penalizing developers for meeting deadlines or missing them. The result of doing this could be that they would adjust their "fudge factor" in time estimation. They would give themselves too much extra time (to reap the rewards), which messes things up if they are good at estimation. Your goal should be to get them to give good and reliable estimates, not to change the way they work to meet these estimates.
It depends on whether the deadline was possible in the first place; maybe the fault was in the planning and the estimate of how long it would take. Make sure you know why the deadline was missed before deciding on a punishment.
Oh, man...
First of all, there are external deadlines and internal deadlines, and they should be different.
What happens with an internal deadline is the frequency of activity increases as the deadline approaches, reaches a peak at the deadline, and then falls off as the deadline recedes. So plan the external deadline to follow the internal deadline by a couple weeks at least.
Then, make sure the deadlines are realistic. Partly you do that by involving the developers in setting them, and in deciding what will be accomplished.
Finally, I've mostly been a developer, but once when I took a stab at management, I never wanted to take the latest-and-greatest version into a conference or presentation. I wanted to take a version that was at least a few weeks old, where I knew what the problems were and could be sure it would not contain unpleasant surprises.
In his wonderful book about project management, "The Deadline", Tom DeMarco tells a story about a western-world project manager running a project in a fictional, post-communist, wild eastern European country (wild is a really good term, because the citizens are a bit... uncivilized).
One day the PM discovers that something has gone wrong: some part of his project has dramatically missed its unrealistic schedule. The previous PM had established the penalty for missing a deadline as simply hanging the responsible person on a butcher's hook, and since the schedules were unrealistic, one man has already missed his deadline.
So the story tells of the day the western-style PM is presented with the responsible person, whom he is expected to send to the butcher's hook. The PM, as most people would be, is terrified at the prospect of sentencing someone to a cruel death simply because the man was unable to finish his work in time. And, by all means, hanging this poor man does not advance the project. Since this is a novel about project management, not about torture, our hero cancels the penalty.
But there is a big issue behind this story about hanging someone: if you set a deadline and establish some kind of penalty for missing it, the day will come when you will probably have to actually punish someone. And will you do it? No matter what the punishment is (hanging, loss of a bonus, firing, breaking the deal, or some fee), you may have to carry it out. Will that penalty do your project any good? You have to answer that for yourself.
So: do not establish a penalty for missing the deadline that you will not want to carry out…
As others have mentioned, before talking about penalties, start with "how do we determine whether these deadlines are realistic"?
Or as my boss once said, "We'll be happy to work a plan when you give us a plan that works".
I still think that should be on a t-shirt.
Once you've reached the point at which people have blown the deadline, you have to ask yourself (A) what the natural consequences of that are and (B) how you can best complete the task and maintain some kind of movement towards the business objectives (even if you're not running a business).
Explicitly penalizing people for blowing the deadline is unlikely to help unless they believe that they've earned it. This will not happen if the deadline was unrealistic, if there were elements of the team that were the primary points of failure, if there were serious problems with requirements, or if the majority of the team involved believes that the above factors are true.
In one case I was on a team that blew a deadline on a small deliverable by over three months - and the original deliverable due date was three months from start! We misunderstood the requirements, didn't sufficiently talk to the customer, and underestimated the time involved. Management was not at all interested in assigning blame. This was partially because it would have been counterproductive to finishing the deliverable, partially because none of us were "problem employees", and partially because management knew that we were all highly-motivated to fix the problem and satisfy the customer. So we got it done, the customer was as happy as could be expected, and we moved on with our lives, with some valuable lessons on how to avoid the situation in the future.
So far in my career I haven't seen any real penalties for missing a deadline (and I've missed plenty). I imagine it's different for companies building software or games to be sold in stores where the company has made promises to the public.
But in the custom software development realm, it's very hard to accurately estimate how long a project is going to take. And oftentimes this fact is reluctantly accepted by companies everywhere.
No penalty. "Deadlines" and estimating have been and continue to be one of the hardest and most challenging parts of software development.
It is ridiculous to impose penalties on developers for this issue.
While I have never seen any disciplinary action or firings I have seen lots of "mandatory" overtime and peer pressure to work longer hours.
I almost got fired as a manager for telling the team that reported to me NOT to come in on weekends and work late. I know those things are detrimental to the project and to morale.
Generally the "punishment" is in the form of making people feel guilty or anxious, but I am sure there are places that do more "official" things.
The world is full of idiots. Management is no exception.
I think the question it self demonstrates a misunderstanding of the role of management and project management.
There is, unfortunately, a common perception in the minds of many with the word Manager in their title that management means putting the heel to, or kicking the butts of, lazy workers. It fits with those who believe in Parkinson's Law as well.
It's not. It's about making it possible for workers to do their jobs, whether that is being the communication channel between them and some other part of the organisation, getting them resources, or running interference (moving the furniture out of the way).
To wit, the PM should already know the project/task is going to miss its deadline. They should be asking questions and know what's going on. They have the power to either cut tasks or increase/rebalance the resources to get the job done (or say to the sponsor: if you don't give us the resources, it ain't getting done on time). And as such, the penalty goes to the PM, whether it is nothing, a tongue-lashing, demotion, or termination.
Sometimes the delay is unavoidable. This is why we build in contingency time. Sometimes, it's a known risk; and so long as you have a backup plan - you are OK.
As for the responses, you have four parameters: Scope, Time, Money, and Quality
Scope - you can cut to make the deadline.
Time - is fixed. You might be able to get your staff to pull a week or two at 60 hours, but your productivity will begin to suffer after that. And it also costs more money if you are paying them fairly.
Money - You can buy pieces from someone else to speed up the process. You could even hire more people, if the work is disjointed enough that you don't have to have a lot of communication with the existing staff - see Brooks' Law.
Quality - Idealistic fools claim you can never skimp on quality. But you can. You don't have to add bugs (one form of anti-quality), but you can put less quality in. Do you code your function so it can handle unlimited-length strings, or is 100 characters good enough for this version? Do you make it easy for the next upgrade to bolt on a new module, or do you weld it shut and worry about adding a plug-in module when you do the next version?
Not doing these things aggressively enough (when required) will surely lead you to failure.
It's certainly not a cut-and-dried answer. Here are some things I weigh, and things I do/encourage, in order to make sure things get done on time.
1.) Set priorities properly. Projects will always have various degrees of completion. It's not a binary "done"/"not done" switch. If the highest priority things are done first, it's easier to swallow. Ideally, you should quickly get to the point where it works, but it doesn't do everything we need it to do and it doesn't look pretty. Once there, it can be released if it absolutely needs to.
2.) I've found that the best way to handle it is to make the releases as small as possible. This makes the estimates more accurate. If your boss or "the market" dictates that your estimate is unacceptable, consider assigning more developers to this task if possible. Sometimes a task can't really be divided up easily, or there's only one person familiar with the code. If it's not a high priority just tell the powers that be that it's going to take longer. Setting reasonable goals and managing expectations is key.
3.) As for motivation, rewards, and punishment... there are many doctors who have written entire books on these subjects. In my experience, giving programmers something which is challenging and letting them have some freedom to do it their way is a good start. Listening is something managers need to do well in order to succeed. If the developer is seasoned, you should be able to just explain the problem and let the developer come up with the solution. If their solution isn't as good as what you had in mind, you can suggest it and go from there. Just dictating how to do something, even for new programmers, is seldom effective. Making the developers think about things will help them be able to solve problems on their own. This is related to delegating, as that only works if the developers can do the work on their own.
4.) Reduce turnover by paying people well if they're doing well. It usually costs much more to find good people. It takes time to get familiar with a large code base and the hiring process can also help avoid spending time on people who can't cut the mustard.
5.) Ask (don't demand) if a developer can stay late/work weekends. Only do this when it's something very critical (for example a security flaw which gives users access to data they ought not be able to access; a new law/regulation passes with which you must comply; etc.). If they say no, don't hold it against them. It's likely not their fault that things didn't get done; and even if it is, it's reasonable that they made plans for time when they aren't expected to be at work. If they are willing to come in, make sure they know your sincere appreciation. Compensate them well for helping out when they aren't obligated to; buying lunch doesn't cost much and it's a very nice gesture. Don't make a habit of expecting people to work late/weekends unless it's part of their contract/agreement (or if they like doing so).
6.) Understand why things are running behind schedule. Did you promise something which wasn't possible (given the people available, the quality expected and the time allotted)? Did some other project come up and take up resources while the deadline wasn't adjusted? Was the code just harder to write than expected? Giving time estimates is difficult. You need to plan everything out, have experience and know how long each developer will take for the task. Compensate for unexpected problems which will likely arise, and give the programmer an earlier deadline than the one you give your boss or the client. It's always OK to be done early. And if you're almost always done early or on time, that one time you miss your deadline will be more understandable if you have an explanation of some sort.
7.) Remember, it usually boils down to time, quality and money. You can generally pick any two, but the third one will need to balance the equation. So if it needs to be done quickly and on a shoestring budget, you can expect the quality to suffer. If you need it done quickly and of high quality, expect to pay a lot of money, and so on.
8.) I'd say the #1 thing which works for me is listening. If you're too busy barking orders then you might not even know about problems within the company. Now, just because a developer says "the code sucks, the design is terrible and we need to re-write everything if we want to get anything done in a timely manner" doesn't mean that it'll happen. But if you hear comments like that, explain why a full rewrite can't happen (we can't afford it, we'd get killed in the marketplace, it'd be way too expensive), and ask what can be done to make sure things don't get much/any worse. Ask if there's a way to clean it up over time. Can we just (re-)write one class and build new stuff based on that? Can we slowly migrate to a new design one feature/segment/module at a time? When you understand where they are coming from and vice versa, you can probably solve at least some of the issues. Just remember that compromising works both ways.
9.) Negative reinforcement seems to result in higher turnover, which is costly. Having a bunch of people who aren't familiar with your code doesn't help deadlines either. Money is one motivator, but I've quit a higher-paying job before to go to one where I was happier, and I know I'm not alone there. Free food when the team does a good job isn't really that expensive. I'm not too keen on group activities since they're either cutting into an employee's personal time or taking away from work time. It works sometimes, but cutting into an employee's personal time so they can hang out with co-workers instead of being with their friends isn't that great a reward. Having everyone stop working is also expensive... so it just depends on the company size, culture, etc.
Hopefully that helps answer your question. The other answers in this thread are also good suggestions... design plays a big part in how quickly code will be written.
Once a project is late, there is not much 'management' (good, bad, well meaning or malicious) can do, that will not result in the project being even later
... the only exception possibly being the removal/avoidance of exterior distractions.
If you're missing your deadlines, fix your estimates.
Taken from a corporate development standpoint...
If the deadline came from someone other than the person performing the work, review the situation to determine the cause of the overrun. In these cases, it is often related to incomplete requirements, scope creep, poor management, etc. No punishment should be given for missing a deadline that the person never provided in the first place.
If the deadline was provided or agreed upon by the individual performing the work, then that person needs to explain the factors that led to the delay. In addition, this person should be reminded to notify their supervisor, project manager, or other responsible party as soon as they are aware that a deadline may be missed. This information should not come to light after the deadline has passed. If this occurs repeatedly, your company's disciplinary process should be followed. This may involve write-ups, suspensions, or termination.
People tend to take real ownership of deadlines when they are the ones setting them. When deadlines are placed on them without their input, deadlines tend to become meaningless to the person performing the work.
Your question is inherently flawed: it assumes that punishment is the best way to manage people. In general, people don't respond well to punishment or threats of punishment; it brings out the worst behaviors, makes the motivation external, and distracts from internal motivation. Rewards and bribes (threats of reward) are the other side of the same coin, and do no better.
These forces are built in to work for hire, however, so you'll never get the best creative work out of your programmers, but you don't have to make it worse by punishing them when they miss a deadline.
Instead, meditate on the creative process, the chaos of multiple people doing creative work, and what tools are effective in managing chaos.
To manage any chaotic system, do lots of measurement and be ready to change course quickly. In the case of programming:
Take the smallest steps possible. Don't "break the task into small steps", as you'll waste a lot of time planning steps that won't work out like you planned. Chaos, remember?
Pick the smallest steps that deliver the most value.
After a short period, reevaluate your plan based on what you've learned.
Deliver working software to actual, real customers as soon as possible, so they can tell you what you should really be doing.
You may recognize this as the thinking behind SCRUM.
Flogging
There are two possibilities:
The deadline was missed because someone didn't do their job.
The deadline was unrealistic.
Rather than thinking in terms of penalties, I would suggest doing a post-mortem to determine what went wrong and finding ways to improve the next deadline estimate.
You ask "what should the penalty be...". It would appear you are asking from the perspective of "inside the company".
In real life, the penalties are often swift and severe - loss of business, lawsuits, bad reputation in the industry. These are the REAL penalties imposed by clients who were promised something by a certain date that was not fulfilled.
Internally, you can often do whatever you like. But once you start involving paying clients, then managing those clients becomes a critical part of the overall job.
Penalties such as I described can often be avoided (or lessened) by "on top" communication with the client. If the client wants something added (so-called feature creep), then this should immediately be answered with the impact these changes will have on the project (costs more, delivered later, whatever). The client should be encouraged to triage all such requests against their deadlines and projected costs (i.e. let the client manage feature creep, not you).
If other things change the delivery time, then as soon as you know there will be slippage, you must inform the client. If done early, clients are remarkably willing to work with you. But if you don't say anything until it's too late, they are less likely to forgive... especially should they discover you knew a significant time earlier and didn't tell them.
Cheers,
-Richard
I've seen executives leave a company shortly after some deadlines were missed. This changed everything but didn't necessarily make things better or worse. I've also seen contractual obligations like clawbacks used to penalize someone for missing a deadline, though I'm not sure how well they work.
When one completely changes what a project is supposed to do midway through the allotted time, the initial trajectory is no longer valid, and the project will likely fail because it will not meet the initial deadlines within budget. Replanning the project into short increments of at most a few months is a response that I believe is a logical direction to take, since many projects have to accommodate changing requirements, which can easily change deadlines, head count or time worked.
What should the penalty be for setting an unrealistically short development timeframe against all of the advice of the developers and their leads?
Coincidentally, this seems to happen almost as often as development teams missing ship dates.
This is not really a programming question, but more of a management question.
Missed deadlines are rarely the developer's fault. As a developer you should try your best to do as good work as you can, but in the end everyone is capable of only so much. If the developers put in honest effort and the deadline was missed despite this, it means the deadline was unrealistic to begin with.
Dealing with deadlines is the responsibility of managers. There are different approaches, but none of them include "penalizing" developers for doing their job. An important thing to understand here is the so-called project management triangle. What it means is that a software project can be good (i.e. meeting requirements, good quality), fast (meeting deadlines) and cheap (headcount, tools). The trouble is that only 2 out of these 3 properties can be chosen.
So if management want something good and fast - it is not going to be cheap.
If management want something good and cheap - it won't be fast.
And finally if management want cheap and fast - guess what, it won't be any good.
So the correct response to missed deadline depends on the chosen scenario. Good and fast requires adding some extra help, better tools, investment in above-average developers and more.
Good and cheap by definition assumes that deadlines are going to be missed (Blizzard, the makers of World of Warcraft, are a good example of this approach).
And finally cheap and fast usually means cutting features and releasing with bugs.
The main goal of project management is to plan how an application is going to be built, in time. You should not start your project development if you don't have a schedule showing what you're going to be doing every single day the project will last.
This way, you can detect that you're going to be late, as long as you follow the project's evolution on a regular (weekly if not daily) basis. And the earlier you know it, the sooner you can act accordingly.
You usually have two options:
Catch up (by hiring additional workers, working more, or removing features).
Tell your customer that something went wrong (even better: what went wrong) and you're gonna need more time.
For the second option, I'm not saying there won't ever be penalties. But from my personal experience, as long as the customer is informed in advance and offered solutions (preferably three: give more money for additional workers / remove features to save some time / accept the project being late), they'll be open to negotiation. Which is always better than conflict :)
Perhaps the better question is whether deadlines are meaningful in the face of inaccurate estimates. Businesses do a lousy job of estimating software--that is a fact. Both management and developers play a part in this and neither one seems willing to own up to their responsibility in this problem.
But to answer your specific questions:
1. What actions have you seen applied as 'penalty' for a missed deadline, and which of these eventually resulted in more-good-code?
The 'penalties' I've seen for missed deadlines for managers and developers range from nothing, to promotion, to simple transfer. The most severe penalties I've personally witnessed were a manager "transferred" to a less important project and a business unit losing a financial bonus.
The only time I have ever seen someone fired over a missed deadline was when the employee was already going to be fired--the deadline gave the business a legal reason to fire the employee.
2. What project-management responses caused the project to fail outright?
This is a whole separate discussion on its own... but there is some inherent bias in this question--project management is at fault.
The three top things I have personally seen PM's do that sabotage a project are (in order of severity):
Ignore data/recommendations/warnings from their technical staff.
Ask for estimates early in the development process. This results in estimates with an error-bar of 10x (it'll take one month, give or take ten months).
Reject/modify/demand software estimates so that they fit an arbitrary budget and schedule. This is not to say Developers should ignore business demands--but rather the business demands need to be set equally by Developers and non-Developers.
3. What responses restored working order and resulted in code that could be maintained afterward?
I have yet to see a functional software development organization. So the fix is usually a lot of blood, sweat, and tears from a couple of heroic developers working with a highly-capable PM who knows how to defend against politics within the company (i.e. deflect BS from their staff).
4. What responses resulted in more-bad-code?
Yelling. Cursing. Insults. (Sadly, this still happens in some workplaces)
More "project management"--either by way of people, meetings, status reports.
Getting software estimates earlier in the process so "we can plan better." Estimates need to come later when your staff has more data and a better understanding of the problem.
Coddling the developers (it's not your fault, the manager screwed up).
Coddling the project managers (it's not your fault, the developers screwed up).
Adding additional, unqualified staff to the project.
I got fired for missing a deadline. I was 98% finished with the product; external forces and deadlines that rigid don't allow software to be developed properly. Even I can admit I wrote some poor code under the circumstances, but I also wrote some good, maintainable code as well. No consideration was given for feature creep; in fact, no technical specifications were detailed upfront, and adaptation of functionality was required as limited and buggy versions of the software became available for management's review. I could have communicated better, but when I did communicate it was emphasized that deadlines were non-negotiable.
Two obvious questions come to mind when a deadline was missed:
Was the deadline feasible?
Did external factors impact performance?
Obviously, if someone presents you with a deadline that doesn't make sense, then there shouldn't be any penalty for missing it. Also, if someone misses a deadline because they were called up for jury duty, that shouldn't be held against them either.
In the event those questions don't apply, the next thing to do is to figure out what went wrong. If you based your estimate of how long something would take, and thus the deadline, on the developers' estimation of how long it would take them to write the code, then perhaps they were too optimistic in their responses.

Software Rewrite-vs-Running Cost Analysis

The IT department I work in as a programmer revolves around a 30+ year old code base (Fortran and C). The code is in poor condition, partially as a result of 30+ years of ad-hoc, poorly-thought-out changes, but I also suspect a lot of it has to do with the capabilities of the programmers who made the changes (and who, incidentally, are still around).
The business that depends on the software operates 363 days a year and 20 hours a day. Unfortunately there are numerous outages. This is the first place I have worked where there are developers on call to apply operational code fixes to production systems. When I first started, there was actually a copy of the source code and development tools on the production servers so that on-the-fly changes could be applied; thankfully that practice has now been stopped.
I have hinted a couple of times to management that the costs of the downtime, having developers on call, extra operational staff, unsatisfied customers etc. are costing the business a lot more in the medium, and possibly even short, term than it would cost to launch a wholehearted effort to re-write/refactor/replace the whole thing (the code base is about 300k lines).
Ideally there'd be some external consultancy that could come in and run the rule over the quality of the code and the costs involved to keep it running vs rewrite/refactor/replace it. The question I have is: how should a business go about doing that kind of cost analysis on software AND be able to have confidence in that analysis? The first IT consultants down the street may claim to be able to do the analysis, but how could management be made to feel comfortable with it over what they are being told by internal staff?
We recently decided to completely rewrite large portions of our business code from scratch, and it has not gone as well as we had hoped. I've seen a lot of quotes saying you should never try to rewrite anything from scratch, and now I see why. I would recommend starting small - don't try to rewrite the whole thing at once. Identify the large problem areas and focus on refactoring small portions of the system at a time. Since there is 30+ years worth of work in the system, it will take a long time to get it back to a reasonable state. We had about 5-8 years worth of work to rewrite, and it has been difficult. I can't imagine 30+ years of work!
First, the profile of the consultant you need is very specific. Unless you can find someone who worked in a similar domain with the same languages, don't hire him.
Second, there's a 99% probability (I like dramatic numbers) the analysis will go as follows:
Consultant explores the application
Consultant understands 10% of the application
Time's up, time for the report
Consultant advises a complete rewrite (no refactoring, plain rewrite)
So you may as well save yourself what the consultant will cost.
You have only two solutions here:
Keep the actual source code, but determine proper methods for fixing problems, so that you have a very long-running refactoring that is progressively carried out by those who know the application
Get a secondary team to make a new application to replace the old one
If I talk about a secondary team, it's because you cannot bring just one architect to make the new application and have the old team working with him:
They're too busy on the old application
There will be frictions because the newcomer will undoubtedly underestimate the task at hand
I talk from experience, believe me.
If you go the "new application" way don't put your hopes too high. You'll end up with an application that has less than half the functionalities of the current one, simply because you cannot cram 30+ years of special case and exceptional situation fixes into a freshly design software.
Oh, also, if your developers happen to tell you they have a plan, by all means, hear them out. They most probably know what they are talking about.
The first thing that comes to mind is that you are prematurely addressing the rewrite/refactor/replace argument. The first two steps I would recommend would be:
Unit tests
QA
It's well within engineering scope to implement these. Unit tests are an essential preliminary step before any reasonable refactor or rewrite could possibly take place. By 'unit test' I mean wrap each function call with corresponding code that proves the code works for all known conditions. In complex retrofits this may not actually happen at the most granular level but any automated tests will help immensely.
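If the code base is mostly C and Fortran programs, one low-risk way to start is characterization ("golden master") tests around whole executables rather than individual functions; a rough sketch, where the program name, inputs, and directory layout are all placeholders:

    import pathlib
    import subprocess

    CASES = ["input_small.dat", "input_typical.dat", "input_edge.dat"]

    def run_legacy(input_file: str) -> str:
        # Run the legacy executable and capture its output.
        result = subprocess.run(
            ["./legacy_calc", input_file],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    def test_against_golden_masters():
        # Pin down current behaviour before refactoring; we are not asserting the
        # recorded output is "correct", only that it hasn't changed.
        for case in CASES:
            expected = pathlib.Path(f"golden/{case}.out").read_text()
            assert run_legacy(case) == expected, f"behaviour changed for {case}"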
And QA - have an independent (and aggressive) quality assurance team that rigorously tests beta releases before production. Their test plans and test procedures become essential for any kind of replacement effort.
Once you've got the code under control, then you are in a position where the business can reasonably consider massive changes.
Just a note about your comment about external consultants - no consultancy will ever care enough about the code to provide realistic quality assurance. QA ends up joined at the hip with the business, defending the company's bottom line. It's ultimately an internal function, and an external consultant can't provide much more than getting you started, really.
I think that your description provides all of the necessary information on code quality (lack thereof). The fact that so many support resources are required also indicates the high costs involved with maintaining the existing system.
As I answered here, a good approach to consider is refactoring one piece of the system at a time until everything works at an acceptable level. I agree with Joel re not throwing away existing code (see Things You Should Never Do). Parts of your code work, so you should leave those in place whenever possible, and focus on the sections that lead to downtime.
Andy also makes a great point about starting small as well.
Another thing to try is reviewing the processes around the system. When you do this, you should try to determine which failure situations are caused directly or indirectly by user action, and whether there are configuration or environment problems. If you are having trouble fixing the code directly, then you can still prop it up by dealing with external issues more effectively.
Read the book Working Effectively with Legacy Code (also see the short PDF version) and surround the code with automated tests, as instructed in that book.
Refactor the system little by little. If you rewrite some parts of the code, do it a small subsystem at a time. Don't try to make a Grand Redesign.
The code has been around for 30 years?
Development paradigms have shifted substantially in the last three decades in many ways, and most relevant to your predicament, I feel, is in terms of the amount of time (in man days) required to create something to input->process->output something.
300,000 lines of code 30 years ago could probably fit into 100,000 lines or less today, while expending fewer man hours(?). This could seem optimistic/ridiculous to some, but on the other hand it is achievable, depending on the type of application in question. You have given no indication as to the classification of the system - is it a real-time manufacturing process control system of sorts with sensors and actuators tied to it? An airline booking system? Does it post-process some backlog of data? In other words, could it be rebuilt in something like Java, and quickly, with an aggressive, smallish team? Have the requirements been documented, and if so, do they need updating or redeveloping from scratch? Is human safety a factor?
Just as a quick sanity check, I think whether or not you should rebuild depends on (in no particular order):
Number of code dudes required.
Level of expertise of said dudes.
Which languages do not fit.
Which languages do fit.
How much it costs to use the chosen language(s) in terms of hardware and software.
How much does the business depend on this to stay alive.
Is it really too much downtime, or are you just nitpicking? (maybe they really don't care, but pretend to).
Good luck with that!

Do you inflate your estimated project completion dates? [closed]

If so why? How much?
I tend to inflate mine a little because I can be overly optimistic.
Hofstadter's Law: Any computing project will take twice as long as you think it will — even when you take into account Hofstadter's Law.
If you inflate your estimate based on past experiences to try and compensate for your inherent optimism, then you aren't inflating. You are trying to provide an accurate estimate. If however you inflate so that you will always have fluff time, that's not so good.
Oh yes, I've learnt to always multiply my initial estimate by two. That's why FogBugz's Evidence-Based Scheduling tool is so useful.
Any organization that asks its programmers to estimate time for coarse-grained features is fundamentally broken.
Steps to unbreak:
Hire technical program managers. Developers can double as these folks if needed.
Put any feature request, change request, or bug into a database immediately when it comes in. (My org uses Trac, which doesn't completely suck.)
Have your PMs break those requests into steps that each take a week or less.
At a weekly meeting, your PMs decide which tickets they want done that week (possibly with input from marketing, etc.). They assign those tickets to developers.
Developers finish as many of their assigned tickets as possible. And/or, they argue with the PMs about tasks they think are longer than a week in duration. Tickets are adjusted, split, reassigned, etc., as necessary.
Code gets written and checked in every week. QA always has something to do. The highest priority changes get done first. Marketing knows exactly what's coming down the pipe, and when. And ultimately:
Your company falls on the right side of the 20% success rate for software projects.
It's not rocket science. The key is step 3. If marketing wants something that seems complicated, your PMs (with developer input) figure out what the first step is that will take less than a week. If the PMs are not technical, all is lost.
Drawbacks to this approach:
When marketing asks, "how long will it take to get [X]?", they don't get an estimate. But we all know, and so do they, that the estimates they got before were pure fiction. At least now they can see proof, every week, that [X] is being worked on.
We, as developers, have fewer options for what we work on each week. This is indubitably true. Two points, though: first, good teams involve the developers in the decisions about what tickets will be assigned. Second, IMO, this actually makes my life better.
Nothing is as disheartening as realizing at the 1-month mark that the 2-month estimate I gave is hopelessly inadequate, but can't be changed, because it's already in the official marketing literature. Either I piss off the higher-ups by changing my estimate, risking a bad review and/or missing my bonus, or I do a lot of unpaid overtime. I've realized that a lot of overtime is not the mark of a bad developer, or the mark of a "passionate" one - it's the product of a toxic culture.
And yeah, a lot of this stuff is covered under (variously) XP, "agile," SCRUM, etc., but it's not really that complicated. You don't need a book or a consultant to do it. You just need the corporate will.
The Scotty Rule:
make your best guess
round up to the nearest whole number
quadruple that (thanks Adam!)
increase to the next higher unit of measure
Example:
you think it will take 3.5 hours
round that to 4 hours
quadruple that to 16 hours
shift it up to 16 days
Ta-daa! You're a miracle worker when you get it done in less than 8 days.
Typically yes, but I have two strategies:
Always provide estimates as a range (i.e. 1d-2d) rather than a single number. The difference between the numbers tells the project manager something about your confidence, and allows them to plan better.
Use something like FogBugz's Evidence-Based Scheduling, or a personal spreadsheet, to compare your historical estimates to the time you actually took. That'll give you a better idea than always doubling. Not least because doubling might not be enough!
I'll be able to answer this in 3-6 weeks.
It's not called "inflating" — it's called "making them remotely realistic."
Take whatever estimate you think appropriate. Then double it.
Don't forget you (an engineer) actually estimate in ideal hours (scrum term).
While management work in real hours.
The difference being that ideal hours are time without interruption (with a 30 minute warm-up after each interruption). Ideal hours don't include time in meetings, time for lunch, or normal chit chat etc.
Take all these into consideration and ideal hours will tend towards real hours.
Example:
Estimated time 40 hours (ideal)
Management will assume that is 1 week real time.
If you convert that 40 hours to real time:
Assume one meeting per day (duration 1 hour)
one break for lunch per day (1 hour)
plus 20% overhead for chit chat, bathroom breaks, getting coffee, etc.
An 8 hour day is now 5 hours of work time (8 - meeting - lunch - warm up).
Times 80% efficiency = 4 hours ideal time per day.
Thus your 40-hour ideal estimate will take 80 real hours to finish.
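The same conversion as a small sketch, using the overhead figures assumed above:

    def real_hours(ideal_hours: float, day_length: float = 8.0,
                   meetings: float = 1.0, lunch: float = 1.0,
                   warm_up: float = 1.0, efficiency: float = 0.8) -> float:
        # Usable ideal hours per day: (8 - 1 - 1 - 1) * 0.8 = 4.0
        ideal_per_day = (day_length - meetings - lunch - warm_up) * efficiency
        days_needed = ideal_hours / ideal_per_day
        return days_needed * day_length

    print(real_hours(40.0))   # 80.0 real hours for a 40-hour ideal estimate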
Kirk : Mr. Scott, have you always multiplied your repair estimates by a factor of four?
Scotty : Certainly, sir. How else can I keep my reputation as a miracle worker?
A good rule of thumb is estimate how long it will take and add 1/2 again as much time to cover the following problems:
The requirements will change
You will get pulled onto another project for a quick fix
The New guy at the next desk will need help with something
The time needed to refactor parts of the project because you found a better way to do things
<sneaky> Instead of inflating your project's estimate, inflate each task individually. It's harder for your superiors to challenge your estimates this way, because who's going to argue with you over minutes.
</sneaky>
But seriously, through using EBS I found that people are usually much better at estimating small tasks than large ones. If you estimate your project at 4 months, it could very well be 7 months before it's done; or it might not. If your estimate of a task is 35 minutes, on the other hand, it's usually about right.
FogBugz's EBS system shows you a graph of your estimation history, and from my experience (looking at other people's graphs as well) people are indeed much better at estimating short tasks. So my suggestion is to switch from doing voodoo multiplication of your projects as totals, and start breaking them down upfront into lots of very small tasks that you're much better at estimating.
Then multiply the whole thing by 3.14.
A lot depends on how detailed you want to get - but additional 'buffer' time should be based on a risk assessment - at a task level, where you put in various buffer times for:
High Risk: 50% to 100%
Medium Risk: 25% to 50%
Low Risk: 10% to 25% (all dependent on prior project experience).
Risk areas include:
est. of requirement coverage (#1 risk area is missing components at the design and requirement levels)
knowledge of technology being used
knowledge/confidence in your resources
external factors such as other projects impacting yours, resource changes, etc.
So, for a given task (or group of tasks) that covers component A, where the initial estimate is 5 days and it's considered high risk based on requirements coverage, you could add between 50% and 100%.
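As a tiny sketch of that calculation (the buffer ranges are the ones above; the tasks and estimates are invented):

    BUFFER = {"high": (0.50, 1.00), "medium": (0.25, 0.50), "low": (0.10, 0.25)}

    tasks = [("component A", 5.0, "high"), ("component B", 3.0, "low")]

    for name, est_days, risk in tasks:
        low, high = BUFFER[risk]
        print(f"{name}: {est_days * (1 + low):.1f} to {est_days * (1 + high):.1f} days")
    # component A: 7.5 to 10.0 days
    # component B: 3.3 to 3.8 days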
Six weeks.
Industry standard: every request will take six weeks. Some will be longer, some will be shorter, everything averages out in the end.
Also, if you wait long enough, it no longer becomes an issue. I can't tell you how many times I've gone through that fire drill only to have the project/feature cut.
I wouldn't say I inflate them, so much as I try to set more realistic expectations based on past experience.
You can calculate project durations in two ways - one is to work out all the tasks involved and figure out how long each will take, factor in delays, meetings, problems etc. This figure always looks woefully short, which is why people always say things like 'double it'. After some experience in delivering projects you'll be able to tell very quickly, just by looking briefly at a spec how long it will take, and, invariably, it will be double the figure arrived at by the first method...
It's a better idea to add specific buffer time for things like debugging and testing than to just inflate the total time. Also, by taking the time up front to really plan out the pieces of the work, you'll make the estimation itself much easier (and probably the coding, too).
If anything, make a point of recording all of your estimates and comparing them to actual completion time, to get a sense of how much you tend to underestimate and under what conditions. This way you can more accurately "inflate".
I wouldn't say I inflate them but I do like to use a template for all possible tasks that could be involved in the project.
You'll find that not all tasks in your list are applicable to all projects, but having a list means I don't let any tasks slip through the cracks by forgetting to allow some time for them.
As you find new tasks are necessary, add them to your list.
This way you'll have a realistic estimate.
I tend to be optimistic in what's achievable and so I tend to estimate on the low side. But I know that about my self so I tend to add on an extra 15-20%.
I also keep track of my actuals versus my estimates. And make sure the time involved does not include other interruptions, see the accepted answer for my SO question on how to get back in the flow.
HTH
cheers
I wouldn't call additional estimated time on a project "inflated" unless you actually do complete your projects well before your original estimation. If you make a habit of always completing the project well before your original estimated time, then project leaders will get wise and expect it earlier.
What are your estimates based on?
If they're based on nothing but a vague intuition of how much code it would require and how long it would take to write that code, then you had better pad them a LOT to account for subtasks you didn't think of, communication and synchronization overhead, and unexpected problems. Of course, that kind of estimate is nearly worthless anyway.
OTOH, if your estimates are based on concrete knowledge of how long it took last time to do a task of that scope with the given technology and number of developers, then inflation should not be necessary, since the inflationary factors above should already be included in the past experiences. Of course there will probably be new factors whose influence on the current project you can't foresee - such risks justify a certain amount of additional padding.
This is part of the reason why Agile teams estimate tasks in story points (an arbitrary and relative measurement unit), then as the project progresses track the team's velocity (story points completed per day). With this data you can then theoretically compute your completion date with accuracy.
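In its simplest form that projection is a single division; a sketch with invented numbers:

    from datetime import date, timedelta

    remaining_points = 120
    velocity_per_day = 4.0     # story points the team has been completing per working day

    working_days_left = remaining_points / velocity_per_day   # 30 working days
    calendar_days = working_days_left * 7 / 5                 # crude: 5 working days per week
    print(date.today() + timedelta(days=round(calendar_days)))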
I take my worst case scenario, double it, and it's still not enough.
Under-promise, over-deliver.
Oh yes, the general rule from long hard experience is give the project your best estimate for time, double it, and that's about how long it will actually take!
We have to, because our idiot manager always reduces them without any justification whatever. Of course, as soon as he realizes we do this, we're stuck in an arms race...
I fully expect to be the first person to submit a two-year estimate to change the wording of a dialog.
sigh.
As a lot of people have said, it's a delicate balance between experience and risk.
Always start by breaking down the project into manageable pieces; in fact, into pieces you can easily imagine yourself starting and finishing in the same day.
When you don't know how to do something (like when it's the first time) the risk goes up
When your risk goes up, that's where you start with your best guess, then double it to cover some of the unexpected, but remember, you are doing that on a small piece of the project, not the whole project itself
The risk also goes up when there's a factor you don't control, like the quality of an input, or that library that seems like it can do everything you want but that you have never tested
Of course, when you gain experience on a specific task (like connecting your models to the database), the risk goes down
Sum everything up to get your subtotal...
Then, on the whole project, always add about another 20-30% (that number will change depending on your company) for all the answers/documents/okays you will be waiting for, the meetings we always forget, the changes of mind during the project and so on... that's what we call the human/political factor.
And then add another 30-40% to account for tests and corrections that go beyond the tests you usually do yourself... such as when you first show it to your boss or to the customer.
Of course, if you look at all this, it ends up that you can simplify it with the magical "double it" formula, but the difference is that you'll be able to know what you can squeeze into a tight deadline, what you can commit to, what the dangerous tasks are, how to build your schedule with the important milestones, and so on.
I'm pretty sure that if you note the time spent on each pure "coding" task and compare it to your estimates in relation to its riskiness, you won't be far off. The thing is, it's not easy to think of all the small pieces ahead of time and be realistic (versus optimistic) about what you can do without hitting any hurdles.
I say when I can get it done. I make sure that change requests are followed up with a new estimate, and not a "Yes, I can do that" without mentioning that it will take more time. The person requesting the change will not assume it will take longer.
Of course, you'd have to be an idiot not to add 25-50%
The problem is when the idiot next to you keeps coming up with estimates which are 25-50% lower than yours and the PM thinks you are stupid/slow/swinging it.
(Has anyone else noticed project managers never seem to compare estimates with actuals?)
I always double my estimates, for the following reasons:
1) Buffer for Murphy's Law. Something's always gonna go wrong somewhere that you can't account for.
2) Underestimation. Programmers always think things are easy to do. "Oh yeah, it'll take just a few days."
3) Bargaining space. Upper Management always thinks that schedules can be shortened. "Just make the developers work harder!" This allows you to give them what they want. Of course, overuse of this (more than once) will train them to assume you're always overestimating.
Note: It's always best to put buffer at the end of the project schedule, and not for each task. And never tell developers that the buffer exists, otherwise Parkinson's Law (Work expands so as to fill the time available for its completion) will take effect instead. Sometimes I do tell Upper Management that the buffer exists, but obviously I don't give them reason #3 as justification. This, of course depends on how much your boss trusts you to be truthful.

Will 20 year old compatibility issues exist 20 years in the future?

There is nothing like the 43rd day of your life spent tracking down issues due to CR/LF, different slash types, or a Big Endian vs. Little Endian bug. These issues are 20 years old and they make me feel as though humans are still cavemen. Are we simply replacing these old issues with new ones? XML has helped, but aren't these issues costing companies millions in time, money, and effort? Is it a conspiracy to promote vendor lock-in?
Yes.
However I don't think it's a conspiracy as such. "Never attribute to malice that which can be explained by incompetence".
I reckon we're stuck with the old problems, and every day we get a slew of new ones too. It's not to do with vendor lock-in; it's more the way in which we think. Realistically we are still cavemen: our brains haven't changed much in 20,000 years, and we carry on making the same mistakes.
You've touched on a much bigger philosophical observation than just coding, it applies to most aspects of human life.
There's always the upcoming Unix date overflow in 2038 or so.
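That one is easy to pin down: a signed 32-bit time_t runs out 2^31 - 1 seconds after the Unix epoch.

    from datetime import datetime, timedelta, timezone

    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    print(epoch + timedelta(seconds=2**31 - 1))   # 2038-01-19 03:14:07+00:00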
Unlike physical constructs like Token Ring networks, software and data are intangible. I think data-formatting issues like the CR/LF problem will still persist 20 years in the future (especially considering they are not solved now).
You can make a judgment call on each item. If programs are unable to read big- or little-endian data, the data will be converted and the issue will eventually die off. But as long as programs keep following the Robustness Principle, things like CR/LF, big vs. little endian, and the mishmash of HTML will persist for a very long time.
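Both classes of problem are easy to reproduce in a few lines, which is partly why they keep biting people:

    import struct

    # The same four bytes are two different numbers depending on byte order.
    raw = b"\x00\x00\x01\x00"
    print(struct.unpack(">I", raw)[0])   # 256    (big endian)
    print(struct.unpack("<I", raw)[0])   # 65536  (little endian)

    # The same text differs byte-for-byte depending on line-ending convention.
    windows_text = b"line one\r\nline two\r\n"
    unix_text = windows_text.replace(b"\r\n", b"\n")
    print(windows_text == unix_text)     # False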

Resources