Is it possible to extract compilation statistics from VS 2010?

For instance, I want to find out how many times I built the solution in a day and how long the process took in total (per day).
I want to buy an SSD, but I am not convinced it would save me a lot of time given my current pattern of computer usage, so I want to approach the question more scientifically. I spend most of my time typing and thinking, plus compiling a C# solution with a few projects (on my current HDD it takes about 5-15s to compile the solution), and possibly in the future Java code. Everybody says SSDs are very fast, but the price holds me back.
Thanks

Related

How to measure productivity loss from slow PCs running Visual Studio? [closed]

Many PCs we have on the development team are outdated and very slow at running Visual Studio 2008. They should very much be replaced with newer machines. But there's a general reluctance on the part of management/the company to buy new machines.
How do we come up with numbers and benchmarks to show that these slow PCs are causing a loss in productivity?
Obviously we can't ask them to sit down with us while we build solutions and/or open various files.
Is there an objective way to come up with some kind of reliable numbers that non-technical people can understand?
It'd be nice to have a way to measure this across an entire organization on many different PCs running Visual Studio. I'm looking for an answer that does better than using a physical stopwatch. :)
Modify your solutions so that the pre-build and post-build events write the current time to a centralised database. Include the machine name and the name of the project.
You can then display this information as a graph showing build time vs. machine.
This should show a correlation between the build time and the age of the machine, hopefully showing that the older machines are slower. You could even convert the time into a $ (or £ or €) value to show how much these older machines are costing. Summing this over time will give a value for the payback on any investment in new machines.
By modifying the solutions you can get this logging deployed onto all development machines by simply getting everyone to do a "get latest" from source control.
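A minimal sketch of such a logger, assuming a shared file stands in for the centralised database (the share path, the CSV schema, and the argument convention are all illustrative, not a prescribed setup):

    // BuildLogger: called from the pre- and post-build events.
    // Appends machine name, project, build phase, and a timestamp to a shared
    // CSV file; the path and schema here are assumptions.
    using System;
    using System.IO;

    class BuildLogger
    {
        static void Main(string[] args)
        {
            string project = args.Length > 0 ? args[0] : "unknown";
            string phase = args.Length > 1 ? args[1] : "unknown"; // "pre" or "post"

            string line = string.Format("{0},{1},{2},{3:o}",
                Environment.MachineName, project, phase, DateTime.Now);
            File.AppendAllText(@"\\server\buildstats\builds.csv",
                line + Environment.NewLine);
        }
    }

The pre-build event would then be something like BuildLogger.exe "$(ProjectName)" pre, and the post-build event the same with post; subtracting the paired timestamps gives the build duration per machine and project.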
This does not really answer your question, but might help to achieve the required results. Tell your bosses that The Programmer's Bill of Rights is something to be taken seriously.
I would attempt to explain to them that programmers cost much more than machines. If you spend 30 minutes a day waiting, do the math and figure out what percentage of your salary is wasted due to laggy machines. Present these numbers to them, and compare it to the price of a new computer, and explain how they could save money in the long run by upgrading.
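To make that math concrete, here is a back-of-the-envelope calculation; every number in it is an illustrative assumption, not a measurement:

    using System;

    class WaitingCost
    {
        static void Main()
        {
            double minutesWastedPerDay = 30;   // assumed waiting time
            double workDaysPerYear = 220;      // assumed working days
            double hourlyCost = 50;            // assumed fully-loaded cost of a developer hour

            double hoursWastedPerYear = minutesWastedPerDay / 60.0 * workDaysPerYear; // 110 h
            double annualCost = hoursWastedPerYear * hourlyCost;                      // $5,500

            Console.WriteLine("Wasted per developer per year: {0:C0}", annualCost);
        }
    }

Even with conservative inputs, the annual waste dwarfs the price of a new machine.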
If they choose to continue to spend big bucks affording your wisdom only to have you sit there and watch a spinning cursor, just laugh because the joke is on them.
Many PHBs understand productivity in terms of lines of code (which IMO is very wrong).
Can you record the amount of code produced per day on the slow machines vs not so slow machines?
Slow machines are the bane of development, IMHO, especially since any delay breaks developers' concentration and can lead to a much costlier switch to things like web browsers. There can be subtler effects too, like the relationship between a slight increase in the latency of the Javadoc popup (or its C# equivalent) appearing when you hover over a method and the chances that someone will actually consult the documentation.
If legal in your company (at least for self-use), record about half an hour of work with a screen-capture tool like Camtasia. Then use a fast editor to spot the times the machine was hung (easy if you have a cursor change, progress bar, etc.) and count the time and number of instances. I've done that for hours of tapes - it doesn't take that long. Use these numbers to argue the case, though you will also need to argue the indirect costs, like context switches.
Also, in my experience, hard drives are often the major cause of slowdown, not CPU or RAM, and unfortunately most organizations skimp on fast hard drives or SSDs and have very strict rules about replacing them.
Don't forget to factor in the cost of time spent figuring out how much slow PCs cost you (this post in other words)!

When is the optimization really worth the time spent on it?

After my last question, I want to understand when an optimization is really worth the time a developer spends on it.
Is it worth spending 4 hours to have queries that are 20% quicker? Yes, no, maybe, yes if...?
Is 'wasting' 7 hours to switch a task to another language to save about 40% of the CPU usage 'worth it'?
My normal iteration for a new project is:
1. Understand what the customer wants, and what he needs;
2. Plan the project: what languages and where, database design;
3. Develop the project;
4. Test and bug-fix;
5. Final analysis of the running project and final optimizations;
6. If the project requires it, a further analysis on the real-life usage of the resources followed by further optimization;
"Write good, maintainable code" is implied.
Obviously, the big 'optimization' part happens at point #2, but often when reviewing the code after the project is over I find some sections that, even if they do their job well, could be improved. This is the rationale for point #5.
To give a concrete example of the last point: a simple case is when I expect 90% of queries to be SELECTs and 10% to be INSERT/UPDATEs, so I load a DB table up with indexes. But after 6 months I see that in real life there are 10% SELECTs and 90% INSERT/UPDATEs, so query speed is not optimized. This is the first example that comes to my mind (and obviously this is more a 'patch' to an initial mis-design than an optimization ;).
Please note that I'm a developer, not a business-man - but I like to have a clear conscience by giving my clients the best, where possible.
I mean, I know that if I lose 50 hours to gain 5% of an application's total speed-up, and the application is used by 10 users, then it maybe isn't worth the time... but what about when it is?
When do you think that an optimization is crucial?
What formula do you usually apply, aware that the time spent optimizing (and the final gain) is not always quantifiable on paper?
EDIT: sorry, but I can't accept an answer like 'until people complain about it, optimization is not needed'. It may be a business view (questionable, IMHO), but it is not a developer's answer, nor (IMHO too) a common-sense one. I know this question is really subjective.
I agree with Cheeso that performance optimization should be deferred until after some analysis of the real-life usage and load of the project, but small, quick optimizations can be done immediately after the project is over.
Thanks to all ;)
YAGNI. Unless people complain, a lot.
EDIT: I built a library that was slightly slower than the alternatives out there. It was still gaining usage and share because it was nicer to use and more powerful. I continued to invest in features and capability, deferring any work on performance.
At some point there were enough features that performance bubbled to the top of the list, and I finally spent some time working on perf improvements, but only after considering the effort for a long time.
I think this is the right way to approach it.
There are (at least) two categories of "efficiency" to mention here:
UI applications (and their dependencies), where the most important measure is the response time to the user.
Batch processing, where the main indicator is total running time.
In the first case, there are well-documented rules about response times. If you care about product quality, you need to keep response times short. The shorter the better, of course, but the breaking points are about:
100 ms for an "immediate" response; animation and other "real-time" activities need to happen at least this fast;
1 second for an "uninterrupted" response. Any more than this and users will be frustrated; you also need to start thinking about showing a progress screen past this point.
10 seconds for retaining user focus. Any worse than this and your users will be pissed off.
If you're finding that several operations are taking more than 10 seconds, and you can fix the performance problems with a sane amount of effort (I don't think there's a hard limit but personally I'd say definitely anything under 1 man-month and probably anything under 3-4 months), then you should definitely put the effort into fixing it.
Similarly, if you find yourself creeping past that 1-second threshold, you should be trying very hard to make it faster. At a minimum, compare the time it would take to improve the performance of your app with the time it would take to redo every slow screen with progress dialogs and background threads that the user can cancel - because it is your responsibility as a designer to provide that if the app is too slow.
But don't make a decision purely on that basis - the user experience matters too. If it'll take you 1 week to stick in some async progress dialogs and 3 weeks to get the running times under 1 second, I would still go with the latter. IMO, anything under a man-month is justifiable if the problem is application-wide; if it's just one report that's run relatively infrequently, I'd probably let it go.
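As a sketch of that fallback, this is roughly what "progress plus cancel" past the 1-second mark looks like; the console output stands in for a real progress dialog, and DoSlowWork is a hypothetical placeholder for the actual work:

    using System;
    using System.Diagnostics;
    using System.Threading;
    using System.Threading.Tasks;

    class ProgressThreshold
    {
        static void Main()
        {
            var cts = new CancellationTokenSource();
            var work = Task.Run(() => DoSlowWork(cts.Token), cts.Token);

            var sw = Stopwatch.StartNew();
            while (!work.IsCompleted)
            {
                // Past ~1 second, show progress and offer a way out (Esc cancels here).
                if (sw.ElapsedMilliseconds > 1000)
                    Console.Write("\rWorking... {0:F0}s (Esc to cancel)", sw.Elapsed.TotalSeconds);
                if (Console.KeyAvailable && Console.ReadKey(true).Key == ConsoleKey.Escape)
                    cts.Cancel();
                Thread.Sleep(100);
            }
            Console.WriteLine(work.IsCanceled ? "\nCancelled." : "\nDone.");
        }

        static void DoSlowWork(CancellationToken token)
        {
            for (int i = 0; i < 50; i++)   // ~5 seconds of pretend work
            {
                token.ThrowIfCancellationRequested();
                Thread.Sleep(100);
            }
        }
    }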
If your application is real-time - graphics-related for example - then I would classify it the same way as the 10-second mark for non-realtime apps. That is, you need to make every effort possible to speed it up. Flickering is unacceptable in a game or in an image editor. Stutters and glitches are unacceptable in audio processing. Even for something as basic as text input, a 500 ms delay between the key being pressed and the character appearing is completely unacceptable unless you're connected via remote desktop or something. No amount of effort is too much for fixing these kinds of problems.
Now for the second case, which I think is mostly self-evident. If you're doing batch processing then you generally have a scalability concern. As long as the batch is able to run in the time allotted, you don't need to improve it. But if your data is growing, if the batch is supposed to run overnight and you start to see it creeping into the wee hours of the morning and interrupting people's work at 9:15 AM, then clearly you need to work on performance.
Actually, you really can't wait that long; once it fails to complete in the required time, you may already be in big trouble. You have to actively monitor the situation and maintain some sort of safety margin - say a maximum running time of 5 hours out of the available 6 before you start to worry.
So the answer for batch processes is obvious. You have a hard requirement that the batch must finish within a certain time. Therefore, if you are getting close to the edge, performance must be improved, regardless of how difficult/costly it is. The question then becomes: what is the most economical means of improving the process?
If it costs significantly less to just throw some more hardware at the problem (and you know for a fact that the problem really does scale with hardware), then don't spend any time optimizing, just buy new hardware. Otherwise, figure out what combination of design optimization and hardware upgrades is going to get you the best ROI. It's almost purely a cost decision at this point.
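A sketch of the safety-margin monitoring described above, using the 5-of-6-hours figure from the earlier paragraph (the thresholds would be configuration in a real system, and RunBatch is a placeholder):

    using System;
    using System.Diagnostics;

    class BatchGuard
    {
        static readonly TimeSpan Allotted = TimeSpan.FromHours(6);
        static readonly TimeSpan SafetyMargin = TimeSpan.FromHours(1);

        static void Main()
        {
            var sw = Stopwatch.StartNew();
            RunBatch();                    // hypothetical placeholder for the real job
            sw.Stop();

            if (sw.Elapsed > Allotted)
                Console.WriteLine("FAILED: batch exceeded the {0}h window.", Allotted.TotalHours);
            else if (sw.Elapsed > Allotted - SafetyMargin)
                Console.WriteLine("WARNING: {0:F1}h used of {1}h; start optimizing now.",
                                  sw.Elapsed.TotalHours, Allotted.TotalHours);
        }

        static void RunBatch() { /* the real overnight work goes here */ }
    }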
That's about all I have to say on the subject. Shame on the people who respond to this with "YAGNI". It's your professional responsibility to know or at least find out whether or not you "need it." Assuming that anything is acceptable until customers complain is an abdication of this responsibility.
Simply because your customers don't demand it doesn't mean you don't need to consider it. Your customers don't demand unit tests, either, or even reasonably good/maintainable code, but you provide those things anyway because it is part of your profession. And at the end of the day, your customers will be a lot happier with a smooth, fast product than with any of those other developer-centric things.
Optimization is worth it when it is necessary.
If we have promised the client response times on holiday package searches that are 5 seconds or less and that the system will run on a single Oracle server (of whatever spec) and the searches are taking 30 seconds at peak load, then the optimization is definitely worth it because we're not going to get paid otherwise.
When you are initially developing a system, you (if you are a good developer) are designing things to be efficient without wasting time on premature optimization. If the resulting system isn't fast enough, you optimize. But your question seems to be suggesting that there's some hand-wavey additional optimization that you might do if you feel that it's worth it. That's not a good way to think about it because it implies that you haven't got a firm target in mind for what is acceptable to begin with. You need to discuss it with the stakeholders and set some kind of target before you start worrying about what kind of optimizations you need to do.
As everyone said in the answers to the other questions, when it makes monetary sense to change something, then it needs changing. In most cases good enough wins the day. If the customers aren't complaining, then it is good enough. If they are complaining, then fix it enough so they stop complaining. Agile methodologies will give you some guidance on how to know when enough is enough. Who cares if something is using 40% more CPU than you think it needs? If it is working and the customers are happy, then it is good enough. Really simple: get it working and maintainable, and then wait for complaints that probably will never come.
If what you are worried about was really a problem, NO ONE would ever have started using Java to build mission-critical server-side applications. Or Python or Erlang or anything else that isn't C, for that matter. And if they had worried like that, nothing would have gotten done in a time frame that would even acquire that first customer you are so worried about losing. You will know that you need to change something well before it becomes a problem.
Good posting everyone.
Have you looked at unnecessary usage of transactions for simple SELECT(s)? I got burned on that one a few times... I also did some code cleanup and found MANY object graphs being returned when maybe 10 records were needed... on and on... sometimes it's not YOUR code per se, but someone cutting corners... Good luck!
If the client doesn't see a need to do performance optimization, then there's no reason to do it.
Defining a measurable performance SLA with the client (e.g., 95% of queries complete in under 2 seconds) near the beginning of the project lets you know whether you're meeting that goal or have more optimization to do. Performance testing at current and estimated future loads gives you the data you need to see whether you're meeting the SLA.
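As a sketch, checking a "95% under 2 seconds" SLA against measured latencies comes down to a percentile calculation; the sample numbers here are made up:

    using System;
    using System.Linq;

    class SlaCheck
    {
        static void Main()
        {
            double[] latenciesMs = { 180, 250, 900, 1200, 400, 2300, 310, 150, 700, 1900 };

            var sorted = latenciesMs.OrderBy(x => x).ToArray();
            // Nearest-rank 95th percentile: smallest value covering 95% of samples.
            int rank = (int)Math.Ceiling(0.95 * sorted.Length) - 1;
            double p95 = sorted[rank];

            Console.WriteLine("p95 = {0} ms; SLA {1}", p95, p95 <= 2000 ? "met" : "missed");
        }
    }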
Optimization is rarely worth it before you know what needs to be optimized. Always remember that if I/O is basically idle and CPU is low, you're not getting anything out of the computer. Obviously you don't want the CPU pegged all the time and you don't want to be running out of I/O bandwidth, but realize that trying to have the computer basically idle all day while it performs intense operations is unrealistic.
Wait until you start to reach a predefined threshold (80% utilization is the mark I usually use, others think that's too high/low) and then optimize at that point if necessary. Keep in mind that the best solution may be scaling up or scaling out and not actually optimizing the software.
Use Amdahl's law. It shows you the overall improvement you get when optimizing a certain part of a system.
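For reference: if the part you optimize takes a fraction p of the total running time and you speed that part up by a factor s, Amdahl's law gives the overall speedup as

    overall speedup = 1 / ((1 - p) + p / s)

So doubling the speed of a query that accounts for 25% of the running time yields only 1 / (0.75 + 0.25/2) ≈ 1.14x overall - often that one number settles whether the hours are worth it.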
Also: If it ain't broke, don't fix it.
Optimization is worth the time spent on it when you get good speedups for spending little time in optimizing (obviously). To get that, you need tools/techniques that lead you very quickly to the code whose optimization yields the most benefit.
It is common to think that the way to find that code is by measuring the time spent by functions, but to my mind that only provides clues - you still have to play detective. What takes me straight to the code is stackshots. Here is an example of a 40-times speedup, achieved by finding and fixing several problems. Others on SO have reported speedup factors from 7 to 60, achieved with little effort.*
*(7x: Comment 1. 60x: Comment 30.)

Good memory profiling, leak and error detection for Windows

I'm currently looking for a good memory / leak detection tool for Windows. A few years ago, I used NuMega's BoundsChecker, which was VERY good. It has since been sold to Compuware, which apparently sold it again to some other company.
Trying to evaluate a demo of the current version has been so far very frustrating, in the best "enterprisy" tradition:
(a) no advertised prices on their website (Great Red Flashing Lights of Warning);
(b) the contact form asked for number of employees and other private information;
(c) no response to my emails asking for an evaluation and price.
I had to conclude that BoundsChecker is now one of "those" products. Y'know, the type where you innocently call and tomorrow 3 men in black suits turn up at your building wanting to talk to you about "partnerships" and not-so-secretly gauge the size of your company, and therefore how much they can get away with charging you.
SO, rant aside, can anyone recommend an excellent memory checking/leak detection tool, how much it costs, and suggestions for where to buy?
You can try Memory Validator; an evaluation copy is available as well.
Licensed version prices
Beware of Compuware's BoundsChecker: it is stable only up to a point. It costs about 3600 dollars, and about an equal amount to maintain from year to year.
But that is peanuts compared to Coverity.
I haven't gotten a good test run to work right under BoundsChecker for the last 3 years. That is why I don't use it anymore, and why I don't recommend you use it, except on small, tiny projects. On big enterprise apps, it's just too slow, takes up too much memory, and simply stops working. I mean really, do you want your application to take 5 minutes to boot? Do you want your test executions to take 3 times longer? Worst of all is its tendency to just lock up. Customer support from Compuware was pretty limited. But BoundsChecker was sold to another company (can't remember their name) whose website is so antiseptic, sterilized and dry, it makes financial company websites look entertaining.
But the killer problem with BoundsChecker is that it is 32-bit only. So if you need to profile a large application that takes lots of memory (more than 1 GB), you are simply out of luck. BoundsChecker will eat up 2 to 3 GB of memory from your app. And with 32-bit apps, you well know that 4 GB is the most you get.
Coverity is great if you hire a person to babysit it. Seriously, Coverity costs more than my house. That's not to mention the person my company would have to hire to babysit the dang thing. It takes 24 hours to do its magic. And it doesn't do all that much more magic than simply compiling your code at warning level 4 and turning on 'Code Analysis' (in Visual Studio).
I've tried other memory leak tools (for native code). They all SUCK big time, are too complicated, or just plain old lock up the system.
I'm so disgusted with the entire field of memory profilers, that I just want to go back to using the debug CRT. That or just write my own.
As for code coverage tools, Bullseye wins hands down. Why can't a memory leak detector just work as solidly as Bullseye?
Microsoft's Application Verifier tool is very good at detecting leaks as well as a bunch of other common programming mistakes on Windows (COM, heaps, TLS, locks, etc).
It doesn't do much in the way of profiling, but it will give you the stack of where the memory was allocated when you leak it, or the stack where it was freed the first time if you double-free, etc.
I've been fairly happy with AQTime, and the pricing is tough to beat (and very transparent - $599/user).
The allocation profiler works fairly well - it's not quite as sophisticated as Boundschecker (from what I remember of Boundschecker), but what it does, it does well - and it handles quite a few other things, too.
This thread is way out of date. It is true that we haven't been able to convince Micro Focus to post prices out on their main web site, but you can get prices on ComponentSource, and we don't send out agents in dark suits and shades 8-/ Pricing depends on whether you are asking for a single user or multiple user license, and whether you want just BoundsChecker, or you want all of DevPartner Studio. See ComponentSource Listing for details.
Anyway, we've not stopped working on the product. On February 4th, we released version 10.5, which (finally) supports 64-bit applications (AMD64, Intel64, not Itanium) on Vista and Windows 7. Quite a few old bugs were fixed along the way. The next update will include support for XP64 and Windows 7 SP1, as well as Visual Studio 2010 SP1.

How to handle short coding sessions?

I'm working on some pet projects, and I generally sit down at my personal computer at about 22:30 or 23:00 to code. But since I try to sleep at about 24:00, I often don't start coding at all and end up reading articles, playing some games, etc.
I don't feel like I can write decent code in an hour, because the project is quite big and I don't want to randomly or carelessly hack at it. Even though I use TDD, most of the time the stuff I'm doing is not straightforward, which requires lots of testing before getting it right.
What's your approach to these kinds of issues? Do you just code later, when you have enough time, or do you have a different approach that allows you to code for just 30 minutes and continue later?
I generally don't write lots of code until I have the time to do it. The reason is that being effective takes focus, and it takes me a bit of time to get properly focused. That said, those 30-minute slots are great for:
Writing more tests: nothing like trying to get to 100% code coverage, and it's not a big waste since you are investing in the codebase.
Research: I spend lots of time reading blogs and looking for frameworks or tools I can use. Spending 30 minutes finding a framework that does 80% of a feature you need is much better than spending hours trying to code it. The other factor is that if you implement the framework and find it is a bad fit, you are better educated about your needs, which means your development will be smoother.
Well, my first thought was "use unit testing", but then I read that you are already doing this. But I still think it's the solution to your problem.
Try to make your tests as small as possible and use the "1 assert per unit test" rule to create small, atomic tests. You should be able to finish several of these small tests in a 30-minute session.
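A minimal sketch of what such tests might look like (NUnit syntax; the Stack<int> under test is just an example target):

    // Small, atomic tests, one assert each; each is small enough for a short session.
    using NUnit.Framework;
    using System.Collections.Generic;

    [TestFixture]
    public class StackTests
    {
        [Test]
        public void Push_IncreasesCount()
        {
            var stack = new Stack<int>();
            stack.Push(42);
            Assert.AreEqual(1, stack.Count);
        }

        [Test]
        public void Pop_ReturnsLastPushedValue()
        {
            var stack = new Stack<int>();
            stack.Push(42);
            Assert.AreEqual(42, stack.Pop());
        }
    }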
Here are some things you could try:
Don't sit near the computer. Instead, take a large piece of paper and go somewhere quiet. Think about what you want to accomplish. Write down interface ideas, detailed implementation. Make a list of questions you need to solve before you can go on.
Take off a week and code away. The ratio of time spent getting into flow to actual flow time is just too poor with 30-minute sessions.
Keep a log about what you do instead of coding. Observe your emotional state.
Go to bed early and try to have your pet coding session very early in the morning.
A small tip (that I use at work too) is to stop coding in the middle of something, with an obvious big red compile error waiting.
The next time you start working, the error will actually help you to remember what on earth you were doing.
While you are working on the small problem, the big picture clears up and then you can continue designing.
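A sketch of leaving that deliberate red error in C#: the #error directive fails the build with your own reminder text (the method, the Order type, and the message here are all made up for illustration):

    public decimal CalculateDiscount(Order order)   // Order is a hypothetical type
    {
        var baseDiscount = order.Subtotal * 0.05m;
    #error RESUME HERE: apply the loyalty-tier multiplier before returning
        return baseDiscount;
    }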
Developing in such short stretches is difficult, but you can still get something out of that time. Unit testing is one option. Writing down the interface of a class is another. While coding the real stuff will take much more time, these tasks are essentially no-brainers, and they are just an exercise in typing.
So, my suggestion is: focus on small tasks that do not require thinking and concentration, and can be completed in the timespan you have.
Never worked for me; pet projects are usually too interesting and I end up working into the late hours of the night or through a weekend.
I would suggest reconsidering your priorities - if all the time you have available is one hour late at night, maybe it's better to spend it on games, articles etc. Or just hanging out. When you have a bit more time, say a lazy Sunday, spend all the time at once and actually get the sense of accomplishing something that pet projects are supposed to give.
Here's what I do with my personal projects outside of work:
1) I try to give myself a good map of my project by planning it out on paper. I diagram all of my objects, data structures, and/or SQL tables and determine what basic functions and interactions between those components are necessary. I may write some actual code during this phase if the solution is obvious, but typically not.
2) Once the big picture is in place, I prioritize the most basic and critical elements. I also try to figure out which parts will be easier to write than others.
3) After priorities have been set I start working on the easiest and most critical parts first and work gradually towards the more complex and less important components. Breaking each task down into smaller parts tends to help. For example, I may design a database table first and the next day create a data interface class that controls interactions with that table.
4) Unit testing really helps me feel a sense of accomplishment, even if 30 minutes of effort only results in a few quality lines of code.
5) Keep a change log, even if it isn't very detailed. I've found my change logs to be invaluable if I work on a big project in many short spurts over an extended period of time.
Those steps right there help me the most. In the end I am able to identify small chunks of the project that can typically be completed in about 30-60 minutes. Of course, as a project develops, I usually have to re-evaluate something and go back to the beginning for a while when I discover I left something out of the planning phases. Sometimes I need to go a bit farther and give myself a timeline with some deadlines, and make sure I celebrate milestones with a personal reward. If you have a tendency to stay up till the wee hours of the morning, something I struggle with, I also suggest giving yourself a coding curfew. I also try to make sure my "coding computer" doesn't have many distractions on it, such as games.
There's always one more bug. If not, there's one more neat feature you can add, and THAT will add more bugs. Which is one reason why I think that the use of the phrase "All you have to do..." by anyone in (or using) IT should be a hanging offense.
I can cut down on how long my coding sessions last by doing trivial things, thinking things out while away from the keyboard (the shower is best - or while in bed at 4 a.m.) and by using lightweight environments such as script languages, but "quick" coding sessions are something I long ago gave up hope on.
Just shifting mental gears into coding mode takes time, picking up the threads of where I was before takes time, discovering that my "quick and easy solution" was neither of the above takes time. Fixing my "quick and easy solution" takes time, debugging takes more time, and so forth.

How much of your work day is spent coding? [closed]

I've been thinking about software estimation lately, and I have a bunch of questions around time spent coding. I'm curious to hear from people who have had at least a couple years of experience developing software.
When you have to estimate the amount of time you'll spend working on something, how many hours of the day do you spend coding? What occupies the other non-coding hours?
Do you find you spend more or less hours than your teammates coding? Do you feel like you're getting more or less work done than they are?
What are your work conditions like? Private office, shared office, team room? Coding alone or as a pair? How has your working condition changed the amount of time you spend coding each day? If you can work from home, does that help or hurt your productivity?
What development methodology do you use? Waterfall? Agile? Has changing from one methodology to another had an impact on your coding hours per day?
Most importantly: Are you happy with your productivity? If not, what single change would you make that would have the most impact on it?
I'm a corporate developer, the kind Joel Spolsky called "depressed" in a couple of the StackOverflow podcasts. Because my company is not a software company it has little business reason to implement many of the measures software experts recommend companies engage for developer productivity.
We don't get private offices and dual 30 inch monitors. Our source control system is Microsoft Visual Source Safe. Enough said. On the other hand, I get to do a lot of things that fill out my day and add some variety to my job. I get involved in business analysis, project management, development, production support, international implementations, training support, team planning, and process improvement.
I'd say I get 85% of my day to code, when I can focus and I have a major programming task. But more often I get about 50% of my day for coding. If production support (non coding-related) is heavy I may only get 15% of my day to code.
Most of the companies I've worked for were not actively engaged in evaluating agile processes or test-driven development, but they didn't do a good job of waterfall either; most of their developers worked like cut-and-paste cowboys with impunity.
On occasion I do work from home, and with kids it's horrible. I'm more productive at work.
My productivity is good, but could be better if the interruption factor and cost of mental context switching was removed. Production support and project management overhead both create those types of interruptions. But both are necessary parts of the job, so I don't think I can get rid of them. What I would like to consider is a restructuring of the team so that people on projects could focus on projects while the others could block the interruptions by being dedicated to support. And then swapping when the project is over.
Unfortunately, no one wants to do support, so the other productivity improvement measure I'd wish for would be one of the following:
Better testing tools/methodologies to speed up unit testing
Better business analysis tools/skills to improve the quality of new development and limit its contributions to the production support load
Realistically, it probably averages out to 4 or 5 hours a day. Although it's "lumpy" - there may be days with 8 or 9 hours of it.
Of all the software developers I know who write production code (as opposed to research), 4 to 5 hours seems to be the maximum of actual coding. There is a lot of other stuff that goes on.
And to be honest, there is a lot of procrastination. I find it's a bit like writer's block: sometimes it's just hard to get started, but then a good 2-hour session is a LOT of work done. It's just all the preparation you have to go through, the experimentation to make sure you are taking the right approach, the endless amount of staring out the window and checking email, etc...
I work a 37.5 hour week.
30 of those hours (80%) I am supposed to be billing our clients.
In reality I find that I spend about 60% coding on actual client systems, 20% experimenting with new techniques and reading blogs, and 20% wasted on office politics and "socializing".
Am I happy about it?
Do I wish that I could stare at the screen 30 hours a week coding on my given assignments?
Well, since 20% of my time is spent bettering myself at my craft, in the 60% that is effective coding I probably produce more than I would in 90% of my time without it.
Then again, try to explain that fact to the higher ups ;)
Well, I generally come in at least fifteen minutes late, ah, I use the side door - that way Lumbergh can't see me, heh heh - and, uh, after that I just sorta space out for about an hour.
...Yeah, I just stare at my desk; but it looks like I'm working. I do that for probably another hour after lunch, too. I'd say in a given week I probably only do about fifteen minutes of real, actual, work.
For me, switching between projects is a big cause of procrastination. When I've just finished a project, I tend to procrastinate on kicking off the next requirement assigned to me. My mind still feels like it's in coding mode, but I then have to estimate the expenses for creating a spec first. So I have to switch from coding to calling customers and the like, which feels uncomfortable.
What helps me most in being productive is to cut away any distraction in the first hours of the day and starting immediately with the day's most important task. I need to get into the flow as early as possible.
I recommend having a look at The Programmers’ Stone:
We know that stress impairs some cognitive functions. The loss of those functions can precisely explain why programming is hard, and show us many other opportunities to improve the ways we organize things. The consequences roll out to touch language, logic and cultural norms.
I spend about 40% of my day coding. 40% goes to non-coding activities (such as fighting with our sketchy build server, or figuring out why NUnit failed with no error message again, or trying to figure out why our code has stopped talking to the Oracle server downstairs... junk like that). The other 20% is usually spent procrastinating, or in meetings.
Am I happy with my productivity? Absolutely not. I work 7ish hours/day, and I spend about 2.5 of that coding. I would much rather be spending 5-6 hours of my day coding, with only an hour dedicated to all the other stuff (sadly, the one thing that would make that happen -- that the PM stop diddling with the build scripts every day -- isn't going to happen). Unfortunately, since I am a corporate developer, management doesn't see the time being frittered away. Because I get so much more done in that 40% of my day than most of the drones in the building get done in a week (including the PM), they think I'm productive.
#Bernard Dy: I have spent probably 30% of my career in corporate settings (am not at the moment). Usually it's after some failed (or not failed, but fizzled) startup idea, or some kind of burnout/change. It's OK for a little bit; it is nice to meet people from totally different backgrounds (who would have thought that lawyers and actuaries could be so much fun to hang out with), but in the end I just find it too hard to get up in the morning with motivation (or after a holiday, dread going back) - probably for reasons like you describe (just a lack of care). But it's good experience and a source of ideas at the least. And you can meet brilliant people everywhere (it's not just programmers who are smart - I always tried to seek out who the real brains were behind a business).
Interestingly, the only time I have practiced strict agile/XP was in a corporate setting - in that case probably 7 hours a day was actual hands-on code (in a pair). I have never been so exhausted after a day of that. Not sure if that is a good thing; perhaps I am just lazy.
To answer some of my own questions:
The current team I'm on only does gross task estimation, so it's hard to track hours per day. I would say that, over my career, the time spent coding has been anywhere between 25% (mostly management) and 85%+ (working from home 4 days a week, getting together for a half-day meeting once a week). If I had to guess, though, the average is probably somewhere in the vicinity of 60%.
The biggest influence for me in time spent coding is the presence or absence of meetings. When I worked on agile projects with everybody in the same room, meetings tended to be ad-hoc and very short, so the time spent coding was very high. I also felt I spent less time -- sometimes a lot less time -- doing non-coding things when I was in a team room, because it's much easier to waste time, accidentally or otherwise, when nobody has a clear view of your monitor. :)
I do outsourcing, and basically I code all day. I have two projects and I don't have much time to do anything else, which means I can't take on more work because I would not be able to finish it. That is a good policy: you should only take on as much as you can handle.
Remember also that you should have spare time, and it is very important to rest enough, because if you don't, you won't be very productive. The key here is planning and discipline.
I spend my non-coding time with my wife. I also like to get out of town and try not to think about my projects; the more I keep this balance, the more productive I am.
When I don't have much work, I like to read programming blogs and study programming.
And finally, I would like to say that IMHO our career should not be seen as just work; instead, you should see it as something fun.
I'm a software developer in an R&D department, working 40 hours a week.
I spend like... 10% of my time actually coding.
In my non-coding hours I mostly test, evaluate, compare, and write down results. I also spend a lot of time writing specifications for the code I will write and researching for it, and I participate in brainstorming meetings for the current projects, etc.
I would say that of my teammates (also software developers) I am the one who codes most at the moment, but it depends on which tasks we are working on at any given time.
I would not count actual coding as working hard. If there is a good specification, proper research, and a good understanding of the project, coding is just a formality and goes almost smoothly and quickly.
Here we have a shared office, with two teams. We mostly code alone, rarely in a pair. The way I work has changed the amount of time I spend coding a lot: in the past I spent most of my time coding, without a very good understanding of the project. If I had a task, I would immediately start coding, then re-code each time I realised I had done something wrong, and so on. It was very ineffective.
The development methodology is somewhere between prototyping and spiral now. It has clearly changed the number of hours I code.
I am happy with my productivity relative to my deadlines and goals.
