Will 20-year-old compatibility issues still exist 20 years in the future? - endianness

There is nothing like the 43rd day of your life spent tracking down issues due to CR/LF, different slash types, or a big-endian vs. little-endian bug. These issues are 20 years old and they make me feel as though humans are still cavemen. Are we simply replacing these old issues with new ones? XML has helped, but aren't these issues costing companies millions in time, money, and effort? Is it a conspiracy to promote vendor lock-in?

Yes.
However, I don't think it's a conspiracy as such. "Never attribute to malice that which can be explained by incompetence."

I reckon we're stuck with the old problems, and every day we get a slew of new ones too. It's not to do with vendor lock-in; it's more the way in which we think. Realistically we are still cavemen: our brains haven't changed much in 20,000 years, and we carry on making the same mistakes.
You've touched on a much bigger philosophical observation than just coding, it applies to most aspects of human life.

There's always the upcoming Unix date overflow in 2038, when a signed 32-bit time_t runs out.
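To put a date on it: a signed 32-bit time_t counts seconds since 1970-01-01 UTC. A quick Python sketch of the boundary:

    from datetime import datetime, timezone

    # Largest second count a signed 32-bit time_t can hold
    max_32bit = 2**31 - 1

    # The last representable moment: 2038-01-19 03:14:07 UTC
    print(datetime.fromtimestamp(max_32bit, tz=timezone.utc))

    # One tick later, a 32-bit counter wraps around to 1901
    print(datetime.fromtimestamp(-2**31, tz=timezone.utc))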

Unlike physical constructs like token ring networks, software and data are intangible. I think data-formatting problems like CR/LF will still persist 20 years in the future (especially considering they are not solved now).
You can make a judgment call on each item. If programs are simply unable to read the wrong byte order, the data will be converted and the problem will eventually die off. But if programs keep following the Robustness Principle - be liberal in what you accept - things like CR/LF, big vs. little endian, and the mishmash of HTML will persist for a very long time.
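For what it's worth, the defensive habit that sidesteps both bug families is to never let the platform decide. A minimal Python sketch (the values are just illustrative):

    import struct

    # Write a 32-bit integer with an explicit byte order instead of the
    # platform's native one: '<' forces little-endian, '>' big-endian.
    payload = struct.pack('<I', 0xDEADBEEF)

    # A reader on any machine gets the same value back, because the
    # format string, not the CPU, decides the byte order.
    (value,) = struct.unpack('<I', payload)
    assert value == 0xDEADBEEF

    # The same idea for line endings: normalize on input...
    text = b'line one\r\nline two\n'.replace(b'\r\n', b'\n').decode('ascii')
    # ...and choose one convention explicitly on output.
    output = text.replace('\n', '\r\n')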

Related

Are projects with a high developer turnover rate really a bad thing? [closed]

I've inherited a lot of web projects that experienced high developer turnover rates. Sometimes these web projects are a horrible patchwork of band-aid solutions. Other times they can be somewhat maintainable mosaics of half-done features, each built with a different architectural style. Every time I inherit these projects, I wish the previous developers could explain to me why things got so bad.
What puzzles me is the reaction of the owners (either a manager, a middleman company, or a client). They seem to think, "Well, if you leave, I'll find another developer, because you're expendable." Or they think, "Oh, it costs that much money to refactor the system? I know another developer who can do it at half the price. I'll hire him if I can't afford you." I'm guessing that the high developer turnover rate is related to the owner's mentality of "My ideas are always great ideas, and if you don't agree, I'll find another (possibly cheaper) developer who agrees with me and does what I want." For the owners, the approach seems to work because their business is thriving. Unfortunately, it's no fun for developers: they go AWOL after 3-4 months of working with poor code, strict timelines, and insufficient client feedback.
So my question is the following:
Are the following symptoms of a project really such a bad thing for business?
high developer turnover rate
poorly built technology - often a patchwork of different and inappropriately used architectural styles
owners without a clear roadmap for their web project, who request features on a whim
I've seen numerous businesses prosper with the symptoms above. So as a programmer, even though my instincts tell me the above points are terrible, I need to take a step back and ask, "Are things really that bad in the grand scheme of things?" If not, I will re-evaluate my approach to these projects, i.e., do I build long-term solutions or band-aid solutions?
At the risk of this post being closed as non-programming related, I'd like to argue that it is programming related, because answers to this question will influence the way a developer approaches a project. He will have a better feel for how far in advance he should plan his development (i.e., build a short-term or long-term solution) knowing he may quit at any moment.
All three symptoms are bad. They really are a bad thing for business. That being said:
Software development exists to make tools. That's it. It's not an end, in and of itself - you're a tool maker.
There are very successful businesses that operate using poor tools. They may not be run as well as they should be, but good results can, and often do, come from bad tools. Also remember, though, that eliminating your three symptoms will likely make the company even more effective, especially in the long term.
High dev turnover is a symptom, not a cause. The cause is bad management. If those businesses prosper, it's usually in the short term and usually precedes a buyout, a merger, or an outright failure. I've seen it happen over and over.
If you can afford to - run. There are bad companies out there, but there are good ones too - at least better than the mess you describe.
All three of those things are bad, but let me focus on turnover.
I'm seeing it happen right now. Management/the company are being cheap, so they don't care much about the team, technology, or process, just the bottom line. So in turn (eventually) team members don't care about the project, just THEIR bottom line. After several months, they decide it's not worth the stress and move on. We are a small team of 6 developers; this year 3 people want out and it's just July. 2 people came in, one more is coming. It seems all we're doing is transition and project handover. The team does not mature and is ineffective. Our customer senses this, and instead of giving the team more projects (more money for the company) they limit it to certain apps. I wonder when management will realize that being cheap is costly!
If I may take a Devil's Advocate view on this for a moment:
Some people like a challenge. Achieving extremely difficult things is very exciting for some people, and there are some developers who enjoy finding those uber-hard problems and working on them. Having something difficult to do appeals to some people.
The turnover means that each new person starts from scratch rather than retaining all the ideas and thoughts that the previous developer had in building out the software, whatever it was intended to do. Sometimes multiple heads can make a good thing, though. After all, how many people developed Windows 7? ;)
The poorly built point is where someone may think, "Oh, I can shine here by fixing some of this stuff," and at times it can work for a while. Ka-ching!
The lack of a roadmap and almost advocating the "Cowboy coding" style may appeal to those that want great autonomy and just move to their own beat. After all, who needs methodologies and best practices when one has supernatural powers to use to make this awesome stuff that will take no time at all?
There is the question of what the root cause of the turnover rate for developers is. Is it just that the project is killing developers, or is the pay so bad that almost anywhere else would be better, or something else? Just something to ponder here, as there can be many a way to get rid of a developer, both literally and figuratively.
To be serious about this for a moment, there are some people that do enjoy high-pressure situations and others that want to avoid them at all costs. Most people are somewhere between the two extremes. Where do you think you fall on that scale, though?
I'll address each of your three points in turn. High turnover in any industry is considered bad for business and a management problem. However, I've read several books about corporate politics and cultures and the effect those have on the corporate bottom line. One book I read studied several major corporations over a 20-year span. It found that poisonous cultures grow slowly and tend to be "lagging indicators" of bottom-line performance problems. It also found that when some of the companies were able to hire new CEOs who ultimately "turned the ship around", it took 10 - 15 YEARS to stop the bleeding. So in a VERY big picture view, yes, turnover is poisonous, although it truly is a symptom of the larger problem. A symptom that should not be ignored. (Even though it usually is ignored for long periods of time. Ever notice that it takes HR a very long time to realize that a department's turnover might be tied to a bad manager?)
Poorly built technical infrastructure - or poorly built products that are sold to customers - is obviously bad for the bottom line. I think that only non-technical people fail to understand this. Of course there is a range between "not optimal but works" and "barely works, as long as you restore the database once a week it gets us by". I think the reason this happens is that the cost corner of the "holy trinity" of cost, quality, and schedule always wins out over quality. In my experience this is guaranteed to be a hard and fast rule: if management has to choose between cost, quality and schedule, quality is always the first tossed to the wayside.
The problem of owners without a clear roadmap and feature creep are symptoms of a lack of business discipline. Feature creep costs money. And when it's bad enough, it can actually prevent anything from being completed.
The interesting thing to me about your question is that you say that they are thriving as a company, so it makes me wonder if the technology is as important to them. Maybe the problem is that they don't see the value in better technology (and they might be right in their case, I'm not sure what kind of business they are).
In general, a very high turnover rate for employees isn't good in any company. When it comes to software, high developer turnover is bad because of all the tutoring that has to be done for the new hire, and the "big picture" knowledge that goes out of the door. So if software is important for the business, a high turnover rate is bad for business.
Only doing requested features without a roadmap is a one-way path to bloatware. If you have no clear strategy, goal or purpose for a product, your only source for what to do is customer requests, which might be bad, because customers often don't actually know what they want and request features they won't use.
From one perspective, the attitude(s) you're quoting are understandable. Software development isn't cheap, and most people/businesses are trying to save money everywhere they can. However I think they're usually shooting themselves in the foot with this sort of behavior.
One suggestion for dealing with this is to get a copy of The Mythical Man-Month and read the section on why adding more programmers to a late project will only make it later (it's the title essay, and the second one in my copy). Many of the same ideas apply to replacing a developer... except that if you're working solo, it's likely you may as well start over, as figuring out what the previous person did may take longer than starting from scratch. After you've read the essay, give a copy to anyone who's taking the attitude you cite and ask them to read it. No guarantee that it will help, but it's worth a try.

How much time in a week a programmer should spend on coding and learning [closed]

I am a final-year college student. I am trying to figure out how much time I should spend on coding and learning.
while (true) {
learn;
code;
}
I have two friends in my university, both studying media informatics, and both were absolute beginners in programming.
The first one reads a lot at home if he has to learn new languages for a project but has never had a private programming project.
The second one reads a bit but has his own python project. A web application for his friends, where you can bet on soccer results.
Both in comparison: the first guy is slow at programming, always stumbles over simple things, and his code could be cut down (in line count and comments) by a factor of at least 5. And in two days he will stumble over the same issue again...
The second guy is much faster, can easily read foreign code and languages, and stumbles over a problem at most twice; the third time, he uses what he has learned...
So imho, doing your own project, where you code because you love it, where you work until morning to fix a bug or to finish an implementation, is the very best way to learn!
Surely you've realized that putting a finite measurement on the amount of time you should spend coding is futile and hugely irrelevant.
Do what you want, but always try and keep up to date.
When I first started programming, it seemed that I learned new things by leaps and bounds: functions, classes, inheritance and so on. But after a while, I realized that you learn by coding. I loaded myself up with tonnes of reading material - Effective C++, Modern C++ - but nothing beat actually sitting down and coding.
Of course, writing your code the same way over and over again does not make you a better programmer. You have to learn to think - how do I make it reusable? less error prone? portable? immune to changes in other areas of the application? easier to maintain? Is there a better way to do this?
Eventually, the learning peaks, and what you learn are what I like to term multipliers. It's like knowing that dirname(__FILE__) in PHP returns the directory that an include file is in. It's like finding out what an ORM is and how, by abstracting away the DB, you can focus more on the code logic rather than endless routines of writing INSERT and UPDATE SQL statements. It's like learning smart pointers and effective use of the STL in C++, or using Firebug effectively when doing JavaScript/CSS/HTML... and lots more.
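To illustrate that ORM "multiplier" in Python terms, here is a minimal sketch using SQLAlchemy as one example ORM; the User model is made up for the illustration:

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.orm import declarative_base, sessionmaker

    Base = declarative_base()

    class User(Base):  # hypothetical model, just for illustration
        __tablename__ = 'users'
        id = Column(Integer, primary_key=True)
        name = Column(String)

    engine = create_engine('sqlite:///:memory:')
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()

    # Instead of hand-writing INSERT and UPDATE statements...
    session.add(User(name='Ada'))
    session.commit()

    # ...the ORM generates the SQL, and the code stays about the logic.
    ada = session.query(User).filter_by(name='Ada').one()
    ada.name = 'Ada Lovelace'
    session.commit()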
So code; once you get frustrated about something ("There must be a better way to do this than now!"), search for a better way - this is how I learn, anyway.
When I was young:
Monday to Friday, 10am to 7pm, programming in office
Saturday afternoon, reading in Chapters
Monday to Saturday, 9pm to 1am, programming at home
Sunday, drive to downtown and pick up a few books from the bookstore
those were the days when Google was known as NNTP
These days:
Monday to Friday, 10am to 7pm, coding in office (too bad I am on the web now ;-)
9pm to 1am, coding on my MacBook Air on a few iPhone projects
Saturday and Sunday, coding for another 16 hours
too bad, Google interrupts me too much and I cannot count how many hours are spent on reading blogs and PDF books ...
You have to decide that yourself. If you're constantly feeling like you should spend more time coding, then you're probably right. You should never force yourself to the point where the sight of a curly bracket makes you want to puke either. If you're sufficiently interested in programming then the amount of time you spend naturally without slacking off/burning out, will be just fine. (And if you're not, you should cut your losses as soon as possible.)
Rest assured that this kind of approach does not make you a less valuable programmer than the angry nerd in your class who spends every waking hour coding as part of his master plan to get back at the world.
the simple answer: do not create some sort of a schedule
why?
you can never know ahead of time what situation you'll be in at a certain hour. Let's say you set it for every day at 10am; then suddenly your dog dies today at 10am, and your family calls you to mourn poor Snuffel... for hours. Schedule's all ruined.
so what do you do?
code up; if you get tired, grab a book or read an article (articles today are really juicy); if you get tired of reading and coding, play games that rattle your brain (yet are entertaining, something like Civilization IV). When you're all rested, fire up your IDE and apply what you just read about. Don't worry if you get it all messy the first time (unless you're a mad genius who certainly will kill himself if he doesn't get something right on his first try).
Note: you should probably set a time for how long you play the game, though :)
My suggestion would be to discover your strengths, and if learning is among them, then you may enjoy spending a lot of time learning, so do what you want here. Of course one shouldn't go so overboard that things like hygiene get sacrificed, so do try to maintain some basic standard of living, which includes the basics of cleaning your place, yourself, and that kind of thing.
For myself, I'd say that I'm almost always trying to learn something, somewhere. Maybe it is learning about how much patience I have in traffic, or how well I can handle a curveball that life has thrown me, like having to do income taxes and discovering what has changed in the software or tax laws from the previous year. If you look at life as a series of opportunities, you may learn a lot in the world.
In my humble opinion, most of the time you are programming. While you program, you are learning from experience. This is one type of learning. Another type of learning comes from reading books and other resources (courses, the internet, development conventions). I use books to keep up with technology and to better understand what I am doing. I read almost every day for 0.5-1.0 hours. It depends on your free time and the type of person you are.
Please take into account that there are more ways to learn: code reviews, reading other people's code and I am sure that there are more that I didn't mention here.
Anyway, good luck.
I am guessing the 'learning' here means acquiring new tips and tricks, grasping new technology in the market, and staying up to date with the trends in technology.
From my experience it takes around 20% of my time for learning, and that is mainly because I work on all the latest technologies from Microsoft, like WPF/Silverlight/Surface. But this percentage of time really depends on your personal interest, organizational interest, and the type of career growth you are looking for.
And if your job is merely converting domain/business logic to code, which doesn't involve critical technology roadblocks, then you might need to spend close to 0% of your time on learning.
Since you didn't present any constraints or conditions in your question then the simplest answer I can give is:
Spend as much as you want.
The very fact you have to ask might mean you are not ideally matched to writing code. First and foremost you should love coding and finding out how things work.
This is not a profession where you ever stand still. Totally agree with another poster, in that you should always be looking for a better way, and recognising when there is no better way.
The best software workers - the rockstars, if you will - are always on. Any situation can be a lesson. For instance, consider Gregor Hohpe's article Starbucks Does Not Use Two-Phase Commit, in which he analyses how the coffee vendor uses asynchronous processing to maximize the throughput of customers' orders.
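As a toy version of the pattern Hohpe describes - accept orders up front, fulfil them asynchronously - here is a minimal Python sketch (the cast of drinks is invented):

    import queue
    import threading

    orders = queue.Queue()

    def barista():
        # The barista works through orders at their own pace...
        while True:
            order = orders.get()
            if order is None:
                break
            print(f'making {order}')
            orders.task_done()

    worker = threading.Thread(target=barista)
    worker.start()

    # ...while the cashier keeps accepting orders without waiting
    # for each drink to finish (no two-phase commit between them).
    for order in ('latte', 'espresso', 'mocha'):
        orders.put(order)

    orders.join()      # wait for outstanding orders to complete
    orders.put(None)   # signal the barista to go home
    worker.join()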
Coding == Learning
In my opinion.

How much of the Mythical Man Month still applies? [closed]

This book was written in the era of time-sharing systems and procedural programming, and with about 30 fewer years of software engineering experience. With improvements in things such as existing libraries, higher-level languages, IDEs, and the amount of documentation and examples available on the internet, how much of the book still holds true?
While I can believe that adding new people to a project may initially slow it down, I would think things such as unit testing, separation of concerns, and other forms of automation and design improvement would allow new members of a team to become productive faster than assumed in the book, assuming the project had good design documentation and processes in place.
I don't have experience on large projects or with large teams, so I am interested to hear what those of you who do have experience with them think.
edit:
I was wondering if new communication tools such as wikis, instant messaging, and the internet in general have decreased the time spent communicating. Based on everyone's answers, I would say that any increase in communication efficiency has been offset by increased complexity.
It is still as true today as the day it was written. This is because it is fundamentally about communication between people working on the same project, and none of the advances of the past 30 years have substantially changed that.
Of course, we have learned a lot in those 30 years, but all improvements in our tools and understanding have been incremental, in accordance with Brooks' "no silver bullet" prediction.
Isn't this kind of like asking if Sun Tzu's Art of War is still applicable to warfare since we have modern equipment?
The book still has things to tell us, and I for one have experienced the problems in communication that increased team sizes bring. You should be aware that unit tests, separation of concerns etc. are not new concepts.
However, some things have not stood the test of time. I don't believe that writing ASCII flow charts in your code is a good idea, and the "surgical team" approach suggested has been tried by several people (Charles Simonyi at MS, most famously) and found not to work too well.
The idea isn't that "large teams don't work", it's that "throwing people/money at the problem isn't the answer". Things like unit testing, separation of concerns, etc. are doing other things rather than just throwing people at the problem. These other things allow you to carefully add more people in the right place to speed things up. If anything, the points you make support the ideas of the book.
Both of the famous Brooks writings, "No Silver Bullet" and "The Mythical Man-Month" are explanations of fundamental limitations, in programming languages and project management respectively.
While it's true that some of the chapters a bit past the halfway point of TMMM deal too heavily with the technology of the time, the remaining chapters are still as relevant today as they were when written.
In TMMM, Brooks follows a technique of "outline the problem", "show some false starts", and "propose my own solution". Some commentators above have pointed out that his own solutions may be considered outdated at this point, but his concise description of the problems inherent in large projects makes the book worth reading.
One theme he keeps coming back to is communications overhead as a limiting factor for large teams. As a thought experiment, consider the effect of the Internet as a communications medium for programmers, and the catalyst it has been for large open source projects.
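Brooks actually quantifies that overhead: n people have n(n-1)/2 potential communication channels, so coordination cost grows quadratically while headcount grows linearly. A few lines of Python make the point:

    # Potential pairwise communication channels in a team of n people
    def channels(n):
        return n * (n - 1) // 2

    for n in (3, 10, 50):
        print(f'{n} people -> {channels(n)} channels')
    # 3 people -> 3 channels
    # 10 people -> 45 channels
    # 50 people -> 1225 channels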
Personally, I would read the book just for "The Joys of the Craft" section. I've never read anything that so elegantly describes what programming at its best feels like.
(If you're curious, it's on page 7, and viewable in the amazon.com "Look Inside!" feature)
I certainly think things like "No Silver Bullet" are just as applicable today as they were decades ago, especially as we see more and more young people come into the industry thinking X is the latest and greatest killer language/technology and all the other technologies will die because of it.
Granted, the references to Ada or sharing computers are antiquated, but the concept of accidental and essential difficulties, buy vs. build, how code is complex by definition because we don't repeat parts, and all the other theoretical topics are still completely accurate and relevant.
The other argument for why TMMM is relevant is that, it's not really about software itself but about how programmers get things done. In this way, it's hard for it to become obsolete.
The two that stick out in my mind: "version 2" still applies, and so does "adding more people doesn't necessarily make it faster".
Spolsky discusses "version 2" in his own way. I do not recall if he specifically links to MMM but it is very similar in concept.
Communication has become a lot more efficient than it was when TMMM was written; however, I think it's all proportional. It takes a lot more to make software production-ready than it did when TMMM was written.
Someone said that everything in computer science was discovered in the 1960s, and everything since then has been derivative.
Read TMMM as a book outlining a problem, perhaps THE problem, in software engineering: it's not the technology, it's the people! All the improvements you mention spring from that core realization. They are all in place to solve the problems Brooks laid out. This is the book that I'm sure Kent Beck and Ward Cunningham and Alistair Cockburn and Martin Fowler all read, took to heart, and then began to craft their silver bullets.
I consider this to be one of the "must read" books for anyone who wants to understand the software development process.
The demand for a development workforce has been growing rapidly for the last 40 years, and this need will not stop. Since the proportion of clever people (vide Joel's "smart and gets things done") in the population remains generally the same, educating more and more developers each year means that the average smartness of a developer is getting lower.
40 years ago, it was demigods who became developers; 20 years ago, it was a job for clever people; now, when I look at young CS students at my alma mater, it seems that they are taking everyone who knows what a computer is.
This does not mean a disaster is coming - the western world keeps importing smart people (or outsourcing work) from emerging markets or third-world countries, and new development tools make it easier to develop good applications. Those factors seem to neutralize each other, making TMMM eternally true.
The Mythical Man-Month is a very dated read, but the core truths still apply. Sure, Brooks discusses the need for a secretary (which is clearly not true today) and his concept of a surgical team doesn't work well, but most of the book is still accurate. His insight that communication requirements increase along with the size of the team is still true. His observation that adding people to a late project makes it later has been borne out by a lot of projects. Unit tests help a little, but they don't stop one from misunderstanding the code or asking a lot of questions. No Silver Bullet also continues to stand the test of time.
All of it. The simple fact is that software projects are nontrivial; we build our own domain knowledge, really, directly into our solutions. Domain knowledge transfer is costly, both for the transferrer and for the transferee; this has not changed. And I, for one, believe it never will, no matter what the practices and tools used. Things may get marginally better, but the simple fact is that teaching and learning are both expensive and difficult things, and there's just no way to avoid them.
The social factors are still present, because humans are still essentially the same beasts we were 50 years ago.
The technical examples are almost completely obsolete, and only make sense when you think about the 0.034 MIPS System/360 of 1964. When you've only got 8 KB of memory, suggesting (as Brooks did) that the user should be responsible for handling leap years, instead of wasting 26 bytes of system memory, made sense, but today it seems downright silly. I don't know any system that small today -- your telephone is thousands of times more powerful than the most powerful OS/360 system. Today we know a lot more about usability and human-computer interaction, and making the user responsible for that category of thing is just crazy.
One programmer can now write more code/build more software than one programmer could back then, but adding the second developer is not going to produce twice as much.
If/when I get on a project with good design documentation and processes in place, I'll let you know if that improves anything.
If you think of a large project that is late, it is most likely in crisis/panic mode. As with most things in this mode, the best answer is a half-decent plan; simply throwing more resources at a crisis doesn't solve anything, it just wastes resources and compounds the problem.
There is no substitute for scheduling (pronounced shed-uling), planning and decent management.
As with most of these "one liners" or "golden rules", consider them more guidelines (with context) than set in stone.

How long does it really take to do something? [closed]

I mean, name a programming project you did and how long it took, please. The boss has never complained, but I sometimes feel like things take too long. But this could be because I am impatient as well. Let me know your experiences for comparison.
I've also noticed that things always seem to take longer, sometimes much longer, than originally planned. I don't know why we don't plan for that from the start, but then I think that maybe it's for motivational purposes.
Ryan
It is best to simply time yourself, record your estimates, and determine the average percentage you're off by. Given that, as long as you are consistent, you can appropriately estimate actual times based on when you believed you'd get it done. It's not simply to determine how bad you are at estimating, but rather to take into account the regularity of inevitable distractions (both personal and boss/client-based).
This is based on Joel Spolsky's Evidence Based Scheduling (essential reading), in which he explains that the other important aspect is breaking your work down into bite-sized (16-hour max) tasks, estimating each, and adding those together to arrive at your final project total.
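A minimal sketch of that bookkeeping (the numbers are invented; Spolsky's full method runs a Monte Carlo simulation over your past velocities rather than a simple average, but the record-keeping looks like this):

    # (estimated hours, actual hours) for each finished task
    history = [(4, 6), (8, 9), (2, 4), (16, 20)]

    # Velocity: how far off your estimates tend to be
    ratios = [actual / estimate for estimate, actual in history]
    average_bias = sum(ratios) / len(ratios)

    # Scale a fresh gut estimate by your measured bias
    gut_estimate = 10  # hours
    print(f'plan for about {gut_estimate * average_bias:.1f} hours')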
Gut-based estimates come with experience but you really need to detail out the tasks involved to get something reasonable.
If you have a spec or at least some constraints, you can start creating tasks (design users page, design tags page, implement users page, implement tags page, write tags query, ...).
Once you do this, add it up and double it. If you are going to have to coordinate with others, triple it.
Record your actual time in detail as you go so you can evaluate how accurate you were when the project is complete and hone your estimating skills.
I completely agree with the previous posters... don't forget your team's workload also. Just because you estimated a project would take 3 months doesn't mean it'll be done anywhere near that timeframe.
I work on a smaller team (5 devs, 1 lead), many of us work on several projects at a time - some big, some small. Depending on the priority of the project, the whims of management and the availability of other teams (if needed), work on a project gets interspersed amongst the others.
So, yes, 3 months worth of work may be dead on, but it might be 3 months worth of work over a 6 month period.
I've done projects of between 1 and 6 months on my own, and I always tend to double or quadruple my original estimates.
It's effectively impossible to compare two programming projects, as there are too many factors that mean the metrics from one aren't applicable to another (e.g., specific technologies used, prior experience of the developers, shifting requirements). Unless you are stamping out another system that is almost identical to one you've built previously, your estimates will have a low probability of being accurate.
A caveat is when you're building the next revision of an existing system with the same team; the specific experience gained does improve the ability to estimate the next batch of work.
I've seen too many attempts at estimation methodology, and none have worked. They may have a pseudo-scientific allure, but they just don't work in practice.
The only meaningful answer is the relatively short iteration, as advocated by agile practitioners: choose a scope of work that can be executed within a short timeframe, deliver it, and then go for the next round. Budgets are then allocated on a short-term basis, with the stakeholders able to evaluate whether their money is being effectively spent. If it's taking too long to get anywhere, they can ditch the project.
Hofstadter's Law:
'It always takes longer than you expect, even when you take Hofstadter's Law into account.'
I believe this is because:
Work expands to fill the time available to do it. No matter how ruthless you are cutting unnecessary features, you would have been more brutal if the deadlines were even tighter.
Unexpected problems occur during the project.
In any case, it's really misleading to compare anecdotes, partly because people have selective memories. If I tell you it once took me two hours to write a fully-optimised quicksort, then maybe I'm forgetting the fact that I knew I'd have that task a week in advance, and had been thinking over ideas. Maybe I'm forgetting that there was a bug in it that I spent another two hours fixing a week later.
I'm almost certainly leaving out all the non-programming work that goes on: meetings, architecture design, consulting others who are stuck on something I happen to know about, admin. So it's unfair on yourself to think of a rate of work that seems plausible in terms of "sitting there coding", and expect that to be sustained all the time. This is the source of a lot of feelings after the fact that you "should have been quicker".
I do projects from 2 weeks to 1 year. Generally my estimates are quite good, a posteriori. At the beginning of the project, though, I generally get bashed because my estimates are considered too large.
This is because I consider a lot of things that people forget:
Time for bug fixing
Time for deployments
Time for management/meetings/interaction
Time to allow requirement owners to change their mind
etc
The trick is to use evidence based scheduling (see Joel on Software).
Thing is, if you plan for a little extra time, you will use it to improve the code base if no problems arise. If problems arise, you are still within the estimates.
I believe Joel has written an article on this: what you can do is ask each developer on the team to lay out his task in detail (what are all the steps that need to be done) and ask him to estimate the time needed for each step. Later, when the project is done, compare the real time to the estimated time, and you'll get the bias for each developer. When a new project is started, ask them to estimate the time again, and multiply that by each developer's bias to get values close to what can really be expected.
After a few projects, you should have very good estimates.
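A sketch of that per-developer correction (all names and numbers invented):

    # Per-developer history of (estimated, actual) hours
    history = {
        'alice': [(5, 6), (10, 14)],
        'bob':   [(3, 3), (8, 12)],
    }

    # Bias: total actual time over total estimated time
    bias = {
        dev: sum(actual for _, actual in tasks) / sum(est for est, _ in tasks)
        for dev, tasks in history.items()
    }

    # New project: multiply each fresh estimate by its author's bias
    new_estimates = {'alice': 12, 'bob': 20}
    corrected = {dev: est * bias[dev] for dev, est in new_estimates.items()}
    print(corrected)  # {'alice': 16.0, 'bob': 27.3...}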

How much of your work day is spent coding? [closed]

I've been thinking about software estimation lately, and I have a bunch of questions around time spent coding. I'm curious to hear from people who have had at least a couple years of experience developing software.
When you have to estimate the amount of time you'll spend working on something, how many hours of the day do you spend coding? What occupies the other non-coding hours?
Do you find you spend more or less hours than your teammates coding? Do you feel like you're getting more or less work done than they are?
What are your work conditions like? Private office, shared office, team room? Coding alone or as a pair? How has your working condition changed the amount of time you spend coding each day? If you can work from home, does that help or hurt your productivity?
What development methodology do you use? Waterfall? Agile? Has changing from one methodology to another had an impact on your coding hours per day?
Most importantly: Are you happy with your productivity? If not, what single change would you make that would have the most impact on it?
I'm a corporate developer, the kind Joel Spolsky called "depressed" in a couple of the StackOverflow podcasts. Because my company is not a software company it has little business reason to implement many of the measures software experts recommend companies engage for developer productivity.
We don't get private offices and dual 30 inch monitors. Our source control system is Microsoft Visual Source Safe. Enough said. On the other hand, I get to do a lot of things that fill out my day and add some variety to my job. I get involved in business analysis, project management, development, production support, international implementations, training support, team planning, and process improvement.
I'd say I get 85% of my day to code, when I can focus and I have a major programming task. But more often I get about 50% of my day for coding. If production support (non coding-related) is heavy I may only get 15% of my day to code.
Most of the companies I've worked for were not actively engaged in evaluating agile processes or test-driven development, but they didn't do a good job of waterfall either; most of their developers worked like cut-and-paste cowboys with impunity.
On occasion I do work from home and with kids, it's horrible. I'm more productive at work.
My productivity is good, but could be better if the interruption factor and the cost of mental context switching were removed. Production support and project management overhead both create those types of interruptions. But both are necessary parts of the job, so I don't think I can get rid of them. What I would like to consider is a restructuring of the team so that people on projects could focus on projects while the others block the interruptions by being dedicated to support. And then swapping when the project is over.
Unfortunately, no one wants to do support, so the other productivity improvement measure I'd wish for would be one of the following:
Better testing tools/methodologies to speed up unit testing
Better business analysis tools/skills to improve the quality of new development and limit its contributions to the production support load
Realistically, it probably averages out to 4 or 5 hours a day. Although it's "lumpy" - there may be days with 8 or 9 hours of it.
Of all the software developers I know, for the ones that write production code (as opposed to research), 4 to 5 hours seems to be the maximum of actual coding. There is a lot of other stuff that goes on.
And to be honest, there is a lot of procrastination. I find it's a bit like writer's block: sometimes it's just hard to get started, but then a good 2-hour session is a LOT of work done. It's just all the preparation you have to go through, the experimentation to make sure you are taking the right approach, the endless amount of staring out the window and checking email, etc...
I work a 37.5 hour week.
30 of those hours (80%) I am supposed to be billing our clients.
In reality I find that I use about 60% coding on actual client systems, 20% experimenting with new techniques and reading blogs, and 20% is wasted on office politics and "socializing".
Am I happy about it?
Do I wish that I could stare at the screen 30 hours a week coding on my given assignments?
Well. Since 20% of the time is used bettering myself at my craft, in the 60% that is effective coding I probably produce more than I would in 90% of my time if I didn't.
Then again, try to explain that fact to the higher ups ;)
Well, I generally come in at least fifteen minutes late, ah, I use the side door - that way Lumbergh can't see me, heh heh - and, uh, after that I just sorta space out for about an hour.
...Yeah, I just stare at my desk; but it looks like I'm working. I do that for probably another hour after lunch, too. I'd say in a given week I probably only do about fifteen minutes of real, actual, work.
For me, switching between projects is a big cause of procrastination. When I've just finished a project, I tend to procrastinate on kicking off the next requirement assigned to me. My mind still feels like it's in coding mode, but then I have to estimate the cost of creating a spec first. So I have to switch from coding to calling customers and the like, which feels uncomfortable.
What helps me most in being productive is to cut away any distraction in the first hours of the day and starting immediately with the day's most important task. I need to get into the flow as early as possible.
I recommend having a look at The Programmers’ Stone:
We know that stress impairs some cognitive functions. The loss of those functions can precisely explain why programming is hard, and shows us many other opportunities to improve the ways we organize things. The consequences roll out to touch language, logic and cultural norms.
I spend about 40% of my day coding. 40% goes to non-coding activities (such as fighting with our sketchy build server, or figuring out why NUnit failed with no error message again, or trying to figure out why our code has stopped talking to the Oracle server downstairs... junk like that). The other 20% is usually spent procrastinating, or in meetings.
Am I happy with my productivity? Absolutely not. I work 7ish hours/day, and I spend about 2.5 of that coding. I would much rather be spending 5-6 hours of my day coding, with only an hour dedicated to all the other stuff (sadly, the one thing that would make that happen -- that the PM stop diddling with the build scripts every day -- isn't going to happen). Unfortunately, since I am a corporate developer, management doesn't see the time being frittered away. Because I get so much more done in that 40% of my day than most of the drones in the building get done in a week (including the PM), they think I'm productive.
@Bernard Dy: I have spent probably 30% of my career in corporate settings (am not at the moment). Usually it's after some failed (or not failed, but fizzled) startup idea, or some kind of burnout/change. It's OK for a little bit; it is nice to meet people from totally different backgrounds (who would have thought that lawyers and actuaries could be so much fun to hang out with), but in the end I just find it too hard to get up in the morning with motivation (or after a holiday dread going back) - probably for reasons like you describe (just a lack of care). But it's good experience and a source of ideas at the least. And you can meet brilliant people everywhere (it's not just programmers who are smart - I always tried to seek out who the real brains were behind a business).
Interestingly, the only time I have practiced strict agile/XP was in a corporate setting - in that case probably 7 hours a day was actual hands-on code (in a pair). I have never been so exhausted after a day of that. Not sure if that is a good thing; perhaps I am just lazy.
To answer some of my own questions:
The current team I'm on only does gross task estimation, so it's hard to track hours per day. I would say that, over my career, the time spent coding has been anywhere between 25% (mostly management) and 85%+ (working from home 4 days a week, getting together for a meeting for half a day once a week). If I had to guess, though, the average is probably somewhere in the vicinity of 60%.
The biggest influence for me in time spent coding is the presence or absence of meetings. When I worked on agile projects with everybody in the same room, meetings tended to be ad-hoc and very short, so the time spent coding was very high. I also felt I spent less time -- sometimes a lot less time -- doing non-coding things when I was in a team room, because it's much easier to waste time, accidentally or otherwise, when nobody has a clear view of your monitor. :)
I do outsourcing and basically I code all day. I have two projects and don't have much time for anything else, which means I can't take on more work because I wouldn't be able to finish it. That is a good policy: take on only as much as you can handle.
Remember also that you should have spare time, and it is very important to rest enough, because if you don't, you won't be very productive. The key here is planning and discipline.
I spend my non-coding time with my wife. I also like to get out of town and try not to think about my projects; the more I keep this balance, the more productive I am.
When I don't have much work I like to read programming blogs and study programming.
And finally, I would like to say that IMHO our career should not be seen as work; you should see it as something fun.
I'm a software developer in an R&D department, working 40 hours a week.
I spend like... 10% of my time actually coding.
In my non-coding hours I mostly test, evaluate, compare, and write down results. I also spend a lot of time writing specifications for the code I will write and researching for it, I participate in brainstorming meetings for the current projects, etc.
I could say that of my teammates (also software developers) I am the one who codes the most at the moment, but it depends on which task each of us is working on at the time.
I would not equate actually coding with working hard. If there is a good specification, proper research, and a good understanding of the project, coding is just a formality and goes almost smoothly and quickly.
Here we have a shared office with two teams. We mostly code alone, rarely in a pair. The kind of work I do has changed the amount of time I spend coding a lot; in the past I spent most of my time coding without a very good understanding of the problem. If I had a task I would immediately start coding, then re-code each time I realised I had done something wrong, and so on. It was very ineffective.
The development methodology is somewhere between prototyping and spiral now. It has clearly changed the number of hours I code.
I am happy with my productivity, related to my deadlines and goals.

Resources