Are these still valid negatives against using Storyboards in developing iOS 6 applications?

Using storyboards in lieu of the traditional .xib strategy is something I'm still wrestling with, as there is some hesitancy about adopting something that does so much under the covers without really understanding what it is doing and what control I'm really losing.
The BNR iOS Programming Book highlights several "cons" to using Storyboards. I've listed them below, and my question is: Are these negatives still valid with iOS 6?
1. Storyboards are difficult to work with in a team
2. Storyboards screw up version control
3. Storyboards disrupt the flow of programming
4. Storyboards sacrifice flexibility and control for ease of use
5. Storyboards always create new view controller instances
I'm looking for answers from people who are actually building (and preferably have deployed) real iOS applications and have struggled with the "storyboards vs .xib" question themselves.
Thanks

I don't think iOS 6 fixes any of these situations. More to the point, Xcode 4.5 doesn't fix them or even attempt to do so. The issues listed seem to reflect opinions or stylistic preferences, and perhaps some misinformation. These aren't the kind of thing that COULD be fixed in code.
I'm using storyboards for a substantial app and I find them to be a real productivity boon. I encourage you to try them to see if you don't agree.
A couple of comments on the issues list:
1. I'm not aware of any issues with teams and SBs, but if point 2 were true (which it isn't), that would explain this concern. I think this is a misconception based on 2.
2. Not true. I use Git religiously, and commit frequently. No problemo. During commits, SBs are displayed in their source code form (XML). The diffs work perfectly and actually provide some insight into how SBs are implemented. This reduces the feeling of mysterious "under the covers" behaviors, which becomes a non-issue with familiarity.
3. Disagree. They don't disrupt the flow, they offer a different flow, which is where they get their power. Lots of programmers find value in the separation imposed by the MVC discipline. SBs introduce a separation between UI element placement and the supporting code. It's a natural split, and it eliminates a ton of mindless code (which eliminates the opportunity for typos, and "de-clutters" the REAL code that remains).
4. Partly agree: they definitely improve ease of use. But I don't find any sacrifice at all. Even when using SBs you can always revert to hand-coding any objects if you need to. There's no sacrifice of flexibility or control.
5. Not sure what this means, or why it might be a problem. Of course we create different VCs for different scenes; that's natural. But it's certainly possible to reuse VC classes in SBs. This item might be a misconception about how to set the class for an SB object. It's easy to forget to do this step, and it sometimes baffles beginners. But it's trivial to correct, and setting the class quickly becomes a habit.
For me the real concerns are:
Using SBs demands a lot of screen real estate during development, which can be frustrating on a small display.
Highly complex UIs with many, many scenes should be split into multiple SBs. Multiple SBs are fully supported, but it's easy to fail to do it. (It's like refactoring a method that gets too big: usually I notice that I need to refactor the code AFTER something has already gotten messy.)
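Crossing from one SB to another does take a few lines of code; here's a rough sketch (the storyboard name, scene identifier, and action method below are made up for illustration, not part of any project in particular):

```objc
// Somewhere in a view controller that lives in the "main" storyboard.
// "Settings" and "SettingsRoot" are hypothetical names for this sketch.
- (IBAction)showSettings:(id)sender
{
    UIStoryboard *settingsStoryboard = [UIStoryboard storyboardWithName:@"Settings"
                                                                 bundle:nil];
    UIViewController *settingsRoot =
        [settingsStoryboard instantiateViewControllerWithIdentifier:@"SettingsRoot"];
    [self.navigationController pushViewController:settingsRoot animated:YES];
}
```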
The convenience of SBs during layout and the elimination of so much of the boilerplate code that clutters up VC objects is a huge benefit. (Every line of code that I eliminate is a line I can't screw up, and a line that can't obscure the real code that remains.)
In short, I can't imagine going back to life without SBs. Yes, it is a change. But I haven't found any real serious downside. It's especially important to keep in mind that even when using SBs, all the non-SB coding techniques still work. Give SBs a try, and report your own experience. Good luck!

I generally agree with jbbenni. The only "valid" criticism I see in your list is the one about "Storyboards always create a new instance." Basically, this meant that although you could wire up a button to push a view controller onto the stack, you could not wire up a button to pop back up the stack without extra code. This has been resolved in Xcode 4.5 with "exit segues", which let you indicate that you want to pop back to a previous controller rather than creating a new instance.
The other limitation of storyboards many complained about was that you could not embed child view controllers in the storyboard itself. This has also been addressed in Xcode 4.5.
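Under the hood, that embedding rests on the view controller containment API from iOS 5; roughly, an embed amounts to something like this sketch (the containerView outlet and the method name are made-up examples, not Apple API):

```objc
// Roughly what an "embed" segue does for you, using the containment API
// that storyboard container views are built on.
- (void)embedChildController:(UIViewController *)child
{
    [self addChildViewController:child];
    child.view.frame = self.containerView.bounds;   // containerView: hypothetical outlet
    [self.containerView addSubview:child.view];
    [child didMoveToParentViewController:self];
}
```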
Storyboards are a significant step forward for iOS development. Complaints like "it makes merging hard" are unfounded; storyboards are no harder to merge than other code; you just have to take the time to actually read the diff instead of glossing over it as "not Obj-C; can't read."
I've used storyboards successfully in a team setting since their introduction. Don't let the uninformed scare you off. They're great.


How do I get my more experienced and authoritative teammates to stop creating "quick temporary solutions"?

I'm currently working on a small, short-lived project. But despite its size, it's complicated enough, with very unclear logic. That's why it was started by more experienced developers. They only work on it from time to time because it's not their main project.
They made some rough drafts of the code, with numerous places that "will be rewritten in the near future". After that they added several more "temporary pieces". And then again...
So now the project is a mess of half-working pieces of code, with hardcoded values like file names and constants that "will be replaced later with working parts". The API is awful (nobody actually thinks about it).
And it's really, really hard to do development now (for me it's my main and only project). I've caught myself spending about an hour every day just re-understanding all those tricky "temporary" things and API weaknesses. And after that hour my brain melts.
I can't just say "guys, your code smells like a trash dump". What's the right way to handle this?
It seems like the ultimate problem is they are writing code and not taking responsibility for its quality.
If this goes against the culture of the organization, it's a matter of making the situation known to others. If the developers don't know, and have a modicum of empathy, I would take the "I don't quite understand this. Could you spend a few minutes walking me through it?" approach with them. They should soon realize what they are doing to you, and good programmers will adjust their practices. This may also have to be done via the management hierarchy: "In order to progress on project X, I need Y hours of the programmers' time to work with their code effectively." It should either happen, or bring up a "Why?" conversation that should lead to changes.
If this is the culture of the organization, that's unfortunate. It may mean that the programmers producing the code don't care, and neither does management. This is a bit of a political question: who is most capable and/or interested in seeing this change? Find allies and proceed as best you can. A candid conversation with the developers may be the best choice, as they are the people capable of change and no one else is going to induce them to, so just ask outright.
Hope this helps.
Push for implementation of a formal code review process. Then they won't dare write code like that in the first place. I recommend using a collaborative tool like SmartBear's Code Collaborator or the free ReviewBoard.
Just like people drive slower when they know the cops are watching, they write better code if they know someone is going to be looking at it.
Are these 'other developers' no longer working on the project? And if so, are you the main person working on it? If the answer to both of these is "yes", then the project is yours. Start to make incremental improvements to make it more readable.
You might also like to show the code to a more experienced developer who didn't work on the project. See if they agree that the code is hard to maintain. Suggest to your boss that you set some time aside to 'finish off' the temporary work and bring it to a point where it is maintainable.
Implementing a formal code review process is also a good idea if you want to prevent this happening again.
And remember, it may not have been the other developers' fault. Sometimes people are told to spend the minimum amount of time, or are told that the code will be thrown away.

The future of Naked Objects pattern (and UI auto-generation) [closed]

I'm asking about the pattern, not the framework. This is a kind of follow-up to a question on UI auto-generation.
Do you believe in the concept of UI auto-generation from metadata?
What kind of problems can be approached this way?
The question arose when I created a small library to support my student projects, which generates an interactive CLI at runtime based on an object's metadata. And I think the CLI it generates is quite decent.
At the other extreme is the Naked Objects Framework, which is rather universal, but the UI it generates is horrible, IMO.
Clearly every problem is specific and needs a specific UI, but maybe there are some classes of problems where auto-generation is acceptable?
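(To make "UI generated from an object's metadata" concrete, here is the kind of reflection such a generator can be built on; an Objective-C sketch with made-up names, purely for illustration, and not taken from any framework discussed here:)

```objc
#import <Foundation/Foundation.h>
#import <objc/runtime.h>

// Sketch: enumerate an object's declared properties and print their current
// values -- the kind of metadata a generated CLI or form can be driven by.
static void PrintEditableFields(id object)
{
    unsigned int count = 0;
    objc_property_t *properties = class_copyPropertyList([object class], &count);
    for (unsigned int i = 0; i < count; i++) {
        const char *name = property_getName(properties[i]);
        NSString *key = [NSString stringWithUTF8String:name];
        id value = [object valueForKey:key];          // generic getter via KVC
        printf("%s = %s\n", name,
               value ? [[value description] UTF8String] : "(nil)");
    }
    free(properties);
}
```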
Yes, I believe the concept of metadata-based auto-generated applications is very sound - mainly because it drastically reduces development time and improves code quality by reducing the massive redundancy you have in most applications where each domain data field is represented in the database, in the model, in the UI, and often also several times in various mapping layers.
I think the future is auto-generated apps that can be modified wherever necessary. Currently, this is AFAIK not really possible; for example, Rails only allows you to fully customize the UI when you use static scaffolding, which basically means code generation, i.e. many further changes in the domain model are then not automatically represented in the UI because the duplication has happened when the code was generated.
I believe the first framework that manages to combine complete auto-generation with complete modifiability afterwards will become the de-facto development standard to a previously unknown degree. Though most likely we'll get there in small steps so that there will not be such a single dominating framework.
Take a look at JMatter, which is a rather better-looking implementation of Naked Objects.
http://www.jmatter.org
There is also Chris Muller's work on MAUI, and Lukas Renggli's work on Magritte (both Squeak /Smalltalk)
We have lots of generated UI in the configuration part of our apps. All those lists that are around forever and changed once in a blue moon by a system administrator.
I find that most applications with a database back-end tend to have a bad design from an OO and NO perspective, as already shown in the NO book by Pawson and Matthews.
Re: qn #1 ... Do you believe in the concept of UI auto-generation from metadata? ... I'm definitely going to answer 'yes' to your first question, being one of the committers to the Naked Objects (Java) framework and writing a book on DDD + NO.
The question mentions metadata. I think this is key to NO being able to succeed. In the latest version (which will be going beta in Feb) the metamodel has been opened up so that it is very extensible: either so that you can write your domain model following your own programming conventions/annotations, or, potentially, so that more sophisticated viewers can look for their own metadata to provide more sophisticated views. (For example, consider that if an object implemented a Location interface, it could be displayed on a Google map.)
Regarding qn #2 ... what kind of problems can be approached this way ... we've always said that NO is more suitable for "sovereign applications" (transactional, operational systems used internally within an organization) than for "transient applications" (like an airport kiosk, say). An NO GUI does require that the user is familiar with the domain, otherwise they won't know what they are looking at.
What's still missing is sophisticated viewers, of course. You are right about the NO GUI; it is definitely low fidelity (though the .NET version is a big improvement, see the recent infoq.com article). On the Java side there is a sister project called scimpi.org that has a lot of promise: it provides a basic web GUI for free but lets you hand-craft web pages as necessary and incrementally. I'm also working on an Eclipse RCP GUI that'll work similarly.
The other thing to add to this though is that the NO approach has value (I believe) even if you choose to write a custom GUI and/or presentation layer. That is, you can use it as a design tool for building a very solid pojo domain layer, and then skin it as you will. Trouble is that NO was never originally sold in those terms, so most will see the NO pattern as an all-or-nothing affair.
Dan
One way to look at this is to consider the difference between the user interface you get from something like Toad or MySQL Browser, where the UI is constructed directly from the tables and their associated metadata, and the user interface that a skilled designer would develop for the actual application. If they're not too dissimilar, then it should be fairly low-hanging fruit for an auto-generation framework.
As you say, there are classes of problems which will work quite well with this kind of auto-generation and some which won't. To my mind the key things are, first, how well the implementation model (or the portion of it you are exposing in the user interface) maps to the conceptual model of the user, and second, how well the behavior of the application can be expressed through a limited set of user interface components (assuming this is a general-purpose UI generation framework).
The article "Universal Model of a User Interface" may be of interest.
I think the idea of automatically generated UIs has a lot of potential especially for your average form-and-table layout database user interface. However, even there a human needs to be in the loop, having the ability to override the output without it being overwritten with the next regeneration.
I suspect automatically generated UIs would be more successful today if interaction designers were more involved in developing the generation algorithms. My impression is that historically the creators of these systems don’t know what kinds of UI-related metadata to include or how to use it. Specifying labels, value ranges, formats, and orders for fields is a start, but more high level information is needed. Sufficient modeling of the tasks and user roles in particular tends to be lacking, along with some basic style-guide-level principles for UI.
Oracle's Designer 2000, for example, was on the right track in including not only the entities and relations in the model, but also the tasks, in the form of a functional hierarchy. Then they blew it by misapplying this metadata (e.g., assuming that depth is always preferred to breadth) and including fundamental flaws when generating the UI (e.g., only one primary window can be opened at a time). The result was UIs that were not even consistent with Oracle's own Applications User Interface Standards.
Getting a basic UI up quickly that lets the customer try out the system and create test data must be of value. Naked Objects frameworks can help with this bootstrapping, even if you have to replace the generated UI with a hand-crafted one before you ship.
In most systems I have worked on, there have been lots of simple housekeeping tables. All these tables need a UI to edit and view them, and there is also great value in these simple editors being consistent. Here a Naked Objects framework could save a lot of time, even if the main day-to-day UI is hand-crafted.
I have seen a couple of failed projects (cases where I was brought in as a rather expensive consultant to help architect the replacement) that used the "naked objects" approach (not the framework, AFAIK), all with simply atrocious UIs. I also worked on replacing a lot of the UI on one project which, in its original incarnation, had a similar approach: the entire application was a tree of objects accessed through context menus and property sheets. This was NetBeans 2.0 circa 1998, the IDE as a giant hierarchical JavaBean.
The bottom line is, your users don't care about your architecture, they care about getting what they need to do done in the most comprehensible-to-mere-mortals set of interactions you can come up with. If that happens to align with your architecture, you are having a lucky day - but it really is serendipity. Trying to force users to care (or even know) about your architecture is a recipe for software nobody wants to use.
Code generally needs to be designed around two not-always-compatible goals:
Maintainability - people who didn't write the code can understand the code
Stability and performance - i.e. the activities the code asks the computer to physically do are both possible, and can be completed within a reasonable time frame
The abstractions and code structures that it makes sense to create to meet those two goals very, very rarely map exactly to user interface elements of any sort. Sometimes you can get away with it, barely, if your audience is technical. But even there, you are likely to please more users with at least a "presentation layer" adapter on top of the architecture that makes sense for programmers and machines.

Inheriting applications at a new job

When inheriting applications at a new job do you tend to stick to the original developers coding practices or do you start applying your own?
I work in a small shop with no guidelines and have always wondered what the rule is here. Some applications are written very well but do not follow the standards I use (variable names, etc.), and I do not want to "dirty" them up. I find myself taking a little extra time to stay consistent.
Others are written very poorly and it looks like the developer was changing his mind every keystroke...
ADDITIONAL THOUGHT
What about when I start my own projects? So now I have introduced a new coding standard to the mix:
The good code - but not my style
The bad code with bad practices and lack of standards
My own standards
If there are standards evident in the code, you should stick to them. If there aren't, start introducing your own.
If there are multiple developers who work on the same module, don't change the style.
If you will hand it off to another developer in the near future (this role is temporary), don't change the style.
If you are taking complete, exclusive, permanent ownership of the module, change it, but follow these rules:
One change at a time.
Fix all indentation to your liking at once, and commit that change.
Fix all brace placement to your liking at once, and commit that change.
Fix all other formatting to your liking at once, and commit that change.
Fix all naming to your liking at once, and commit that change.
Don't spend a lot of time on it. If it takes more than an hour or two, then cut back.
Make the commit description clear, so you can quickly ignore these changes when analyzing change history.
Use automated tools to make sure the result is consistent and complete, so you don't have to mess with it again.
Run your tests. Just because your changes shouldn't affect behavior doesn't mean they won't. (Triple negative, ouch!)
Make sure everyone knows what you're doing. Someone might have a change hanging around that they want to commit now, and it'll be painful to merge with your changes. Also, you don't want anyone to get surprised and go tell your boss before you do.
Don't do it again. This is a one-time thing.
Publish a style guide that follows best practices, and build consensus around following it. Refactor old code as you need to maintain it.
I'm in the same boat as you: a lone developer who inherited some apps from the last guy.
I've been sticking to what appear to be his standards for existing projects, for consistency, and using my own preferences for new stuff.
I've noticed that most people think whoever came before them had no idea how to write code. Then whoever comes after them thinks the same thing. Some things are common sense, but most things are just personal preference.
For major problems, e.g. using comments vs. not using comments, updating the code will probably make it easier to work with, both for you and for anyone else. Even then, your time is probably best spent updating the code as you come across it, instead of embarking on a huge project to refactor everything (introducing new problems in the process).
For things like indentation, line spacing, variable names, and one-line ifs vs. multi-line ifs, the reality is that your coding style is likely just as bad as you think theirs is.
I think it depends on what you mean by "coding practices". If you mean things like code formatting and naming conventions and things that I would personally consider "cosmetic", then stick with what's already there. If you mean things like coding best practices and writing code correctly in the first place, then go back and fix the problems if possible, but at the very least make your new code follow best practices.
Given that most of the applications I've inherited have been hacked together by "cowboy coders" who didn't apply even the most basic of coding practices, my opinion is a little biased.
I say introduce coding standards if there are none, or if the ones that exist are blatantly wrong and/or stupid (e.g. "All variables must be no more than 4 characters in length", "Every database column is varchar(255) null", etc.). Obviously if you have a team then you'll need to come to an agreement as to what practices to implement, but if you're a solo dev then you have free rein, and IMO you should introduce order to the chaos.
If the code works and seems to have a clean format, don't waste time changing the style.
If the code is badly written, by all means change it when you have some downtime, or the next time you work on the project.
For new projects, do them your way, since there is no standard. Like the other well-written programs, yours should be easy enough to maintain.
composition is often preferred over inheritance
:-P
If it's just you, go for it. If it's a team, especially if any of the original developers are still around (or likely to be called in for consulting), keep with the existing style and practices as much as you can. Don't follow them down a rat hole - if you think they're doing something stupid, change it, but if it's just a stylistic thing, keep to their style as much as you can.
On several jobs I've been on, we had no rule on coding style other than "if you're making changes to an existing file/class, use the existing style, even for new code."
I follow company standards if there are any.
If there aren't any and the changes are small, I adapt to the existing coding style.
If there are larger changes to be made and I don't like the coder's style, I will use my own.
And if the existing code is bad I will change that too.
Will you ever have a better opportunity to update existing code with a standard style? Probably not. When you are new to the code you are going to have the best chance of taking some extra time to make non-new-feature and non-bug-fix changes. The lack of standards may be discouraging but you are unlikely to have a better chance to standardize than when you first inherit the code.
It sounds like we're talking about a situation with no official style guides / best practices. In that case, as Sean said, I'd take the lead on establishing some. But... if at all possible, pick an existing, widely-used standard. It's more likely to be accepted, all the arguments are done with, and the odds of out-of-the-box tool support (editors, code review tools, etc.) greatly increase.
Getting others to adopt it will often work best from the bottom up -- write new code to the new standards, mention to others that you've done so, ask for feedback. Much easier than trying to get approval and buy-in in advance.
Within the existing, ugly project, avoid wholesale changes to existing modules. For one thing, diffs and version control will get quite confusing if a file is suddenly reindented.
If the chunk you're working on is so bad as to be unreadable, I'd do an initial checkin just to reformat it; follow that up with actual code changes.
I would apply the same refactoring standards to the code as I would if it DID match my style standards. That is, I'd ignore the style and just go on about my business.
If it's not terribly difficult to follow the style that is in the code - with regards to naming conventions, I'd go ahead and use those for new code.
However, I wouldn't bother trying to follow stuff like 'tabs should not be used', 'every line should be indented 2 spaces', etc. There are plenty of editors out there where you can 'pretty' the code whenever you need it these days.
G-Man
I think it depends highly on the specific case.
If you are a consultant on a project for a short time you should stick to the way thing are.
If you are on for a long time. Try to refactor bad code into your own scheme.
If you are on for a short time but you are working on an isolated module, then use your own scheme.
Short answer is, "It depends." Here are a few factors that I'd consider important in determining whether to keep the old style or not:
1) Scope of changes. If it is close to a total re-write of the application, then it may make more sense to put in a new standard if you have one that you feel works well for you.
2) Likelihood of future changes. Will this be changed over and over again? If so, then taking some time early on may well be worth it in the end. This does require a bit of judgement and predicting the future, but it may be easy in some cases to see that there will be changes over and over again for some systems that are fairly complex.
3) How much of the code is a customization on top of a 3rd-party codebase (e.g. a company's specific customizations of Oracle products for their business processes) compared to a completely home-grown application. The impact here is how much pain there may be when a new version is released and an upgrade is requested, given how heavily it was customized and how much may break.
When starting your own projects, put in the best standard that you know.
If I inherit code that has obviously never been refactored, I would take that as an opportunity to impose some of my own structure.
If people expect me to make time and cost estimates for adding functionality to the code, I'll need to be intimately familiar with it, and I'll want to make sure it lives up to my standards.
If the code is already well-written, that would be a blessing that I would not mess with. But in my experience, this hasn't happened very often.

Should I use Cocoa bindings for my latest project?

I'm starting a project which I think would benefit from bindings (I've got a source list table, several browser views, etc), but I think it would also be quite doable, and perhaps more understandable, without them. From my limited experience I've found bindings to be difficult to troubleshoot and very "magic" (e.g. it's difficult to insert logging anywhere to figure out where stuff is breaking, everything either works or it doesn't).
Is this just my inexperience talking (in which case I could sit down and spend some time just working on my understanding of bindings and expect things to start becoming clearer/easier), or would I be better off just writing all the glue code myself in a manner I am sure I can understand and troubleshoot?
Use Bindings.
Note that you must follow the MVC pattern to get the most from bindings. This is easier than it seems, as Cocoa does almost everything for you nowadays:
View: NSView and subclasses (of course), NSCell and subclasses, NSWindow and subclasses
Controller: NSController and subclasses (especially NSArrayController)
Model: Core Data
If you're not going to use Core Data, then you get to roll your own model objects, but this is easy. Most of these objects' methods will be simple accessors, which you can just @synthesize if you're targeting Leopard.
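A bare-bones sketch of such a hand-rolled model object (the Person class and its properties here are just an example of mine): synthesized accessors are KVC-compliant and post KVO notifications automatically, which is all bindings need from the model side.

```objc
#import <Foundation/Foundation.h>

// A minimal hand-rolled model object. The synthesized accessors are
// KVC-compliant and KVO-observable, so bindings (e.g. through an
// NSArrayController) work against it with no extra code.
@interface Person : NSObject
@property (copy) NSString *name;
@property NSInteger age;
@end

@implementation Person
@synthesize name, age;
@end
```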
You usually can't get away with not writing any code, but Bindings can enable you to write very little code.
Recommended reading:
Key-Value Coding (KVC) Programming Guide
Key-Value Observing (KVO) Programming Guide
Model Object Implementation Guide
KVC Accessor Methods (part of the aforementioned KVC Programming Guide) and my complete list of KVC-compliant accessor selector formats
Bindings can seem magical in nature. To understand the magic behind bindings, I think one must understand KVC/KVO thoroughly. I really do mean thoroughly.
However, in my case (new to Obj-C, 9 months in), once I got KVC/KVO, bindings were a thrill. They have significantly reduced my glue code and made my life significantly easier. Debugging bindings became a case of making sure my key-value changes were observable. I find that I am able to spend more time writing what my app is supposed to do rather than making sure the view reflects the data.
I do agree, though, that bindings are highly intimidating at first.
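To give a flavor of the glue code that disappears (the outlet and controller names below are made up): a binding can be established in a single call, and any change that goes through KVC-compliant accessors is picked up by the bound view automatically.

```objc
// Equivalent of wiring the field's Value binding in Interface Builder.
// nameField and peopleController are hypothetical outlets in this sketch.
[nameField bind:NSValueBinding
       toObject:peopleController
    withKeyPath:@"selection.name"
        options:nil];

// The view updates only because the change is KVO-observable, i.e. it
// goes through KVC-compliant accessors or KVC itself:
[[peopleController selection] setValue:@"Alice" forKey:@"name"];
```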
My general approach is to start out as much as possible using bindings and see how things go. However, if a particular interface element start to become problematic using bindings, or more effort than it's worth, then I don't hesitate to fall back to using more traditional methods (e.g. data sources, actions) when it makes sense. I've found these things can be pretty hard to predict ahead of time, but I think favoring bindings is better in the long run, as long as you don't get too dogmatic about sticking with them in situations when they don't provide any benefit.
After a while of working with Bindings I've found that it's not magic at all, though it is sufficiently advanced technology. Debugging a bound interface takes different techniques than a glued-together interface, but once you have those techniques, the advantages in terms of reuse, maintainability and consistency are IMO significant.
It seems like I use bindings, KVO, and data source methods all about equally in my applications. It really depends on the context. For example, in one of my projects I use bindings just about everywhere except the main window's outline view, which is complex enough that I wouldn't want to even try to fit it into an NSTreeController. At the same time, I also use KVO to reload UI objects and track dependencies in my model objects.
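(For contrast, handling an outline view like that the traditional way means implementing the NSOutlineViewDataSource methods yourself in the data source class; a sketch, with a made-up Node model class and rootNode property:)

```objc
// Hypothetical model class used by this sketch.
@interface Node : NSObject
@property (copy) NSString *title;
@property (strong) NSArray *children;   // array of Node
@end

// NSOutlineViewDataSource methods, implemented in the outline view's
// data source object. self.rootNode is a hypothetical property there.
- (NSInteger)outlineView:(NSOutlineView *)outlineView numberOfChildrenOfItem:(id)item
{
    Node *node = item ?: self.rootNode;      // nil item means the root
    return (NSInteger)[node.children count];
}

- (id)outlineView:(NSOutlineView *)outlineView child:(NSInteger)index ofItem:(id)item
{
    Node *node = item ?: self.rootNode;
    return [node.children objectAtIndex:(NSUInteger)index];
}

- (BOOL)outlineView:(NSOutlineView *)outlineView isItemExpandable:(id)item
{
    return [[(Node *)item children] count] > 0;
}

- (id)outlineView:(NSOutlineView *)outlineView
    objectValueForTableColumn:(NSTableColumn *)tableColumn byItem:(id)item
{
    return [(Node *)item title];
}
```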
The important thing to keep in mind when learning advanced Cocoa topics like Bindings or Core Data is that you must understand all the technologies behind them: everything from data source protocols to notifications and KVO. Once you've had enough experience working with them to know how the "magic" works, you'll be able to integrate the higher-level stuff into your application with ease.
In your particular case, you'll have to decide if it's worth the extra time to learn bindings on top of developing your application. If possible, it might benefit you to develop a simplified prototype of your application using bindings, so you know how to best fit the pieces together when you start the actual project.
My opinion is that yes, you should adopt bindings; the technology is well-understood and stable now, and it's worth doing for the amount of code you no longer need to write. When I first switched to bindings, I had quite a bit of trouble with getting the lifetime of observing and observed objects to match up, and with UI breakages because it was observing a valid object, but the incorrect one. Once you've seen those problems a couple of times, knowing how to avoid them and how to spot them if they do appear becomes straightforward. Ish. I still wish for "this event here caused this update here" traces in the debugger, but I'm still glad I made the move.
For the curious, I did end up using bindings and after a couple of days they suddenly just started "making sense". So I would definitely recommend just going ahead and taking the time to learn them.
I also found the advice of Brian Webster quite helpful, as I did indeed end up doing a handful of things the old fashioned way either because bindings couldn't do what I wanted or because it would have been prohibitively complicated to do what I needed using bindings.

What can you do to a legacy codebase that will have the greatest impact on improving the quality?

As you work in a legacy codebase what will have the greatest impact over time that will improve the quality of the codebase?
Remove unused code
Remove duplicated code
Add unit tests to improve test coverage where coverage is low
Create consistent formatting across files
Update 3rd party software
Reduce warnings generated by static analysis tools (e.g. FindBugs)
The codebase has been written by many developers with varying levels of expertise over many years, with a lot of areas untested and some untestable without spending a significant time on writing tests.
Read Michael Feathers' book "Working Effectively with Legacy Code"
This is a GREAT book.
If you don't like that answer, then the best advice I can give would be:
First, stop making new legacy code[1]
[1]: Legacy code = code without unit tests and therefore an unknown
Changing legacy code without an automated test suite in place is dangerous and irresponsible. Without good unit test coverage, you can't possibly know what effect those changes will have. Feathers recommends a "stranglehold" approach where you isolate areas of code you need to change, write some basic tests to verify basic assumptions, make small changes backed by unit tests, and work out from there.
NOTE: I'm not saying you need to stop everything and spend weeks writing tests for everything. Quite the contrary, just test around the areas you need to test and work out from there.
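A characterization test is the usual way to "test around the areas you need to test" before touching them: assert whatever the legacy code currently does, so a refactoring can't silently change it. A sketch using XCTest, with a hypothetical legacy class and expected value of my own:

```objc
#import <XCTest/XCTest.h>
#import "InvoiceCalculator.h"   // hypothetical legacy class under test

// Pin down current behavior before changing anything.
@interface InvoiceCalculatorCharacterizationTests : XCTestCase
@end

@implementation InvoiceCalculatorCharacterizationTests

- (void)testTotalForKnownFixtureStaysTheSame
{
    InvoiceCalculator *calc = [[InvoiceCalculator alloc] init];
    double total = [calc totalForCustomerID:42];   // hypothetical method

    // Whatever the code produces today becomes the expected value,
    // so any behavioral change introduced by refactoring is caught.
    XCTAssertEqualWithAccuracy(total, 118.75, 0.001);
}

@end
```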
Jimmy Bogard and Ray Houston did an interesting screen cast on a subject very similar to this:
http://www.lostechies.com/blogs/jimmy_bogard/archive/2008/05/06/pablotv-eliminating-static-dependencies-screencast.aspx
I work with a legacy 1M LOC application written and modified by about 50 programmers.
* Remove unused code
Almost useless... just ignore it. You won't get a big Return On Investment (ROI) from that one.
* Remove duplicated code
Actually, when I fix something I always search for duplicates. If I find some, I either extract a generic function or add a comment at every occurrence flagging the duplication (sometimes the effort of extracting a generic function isn't worth it). The main idea is that I hate doing the same action more than once. Another reason is that there's always someone (could be me) who forgets to check for the other occurrences...
* Add unit tests to improve test coverage where coverage is low
Automated unit tests are wonderful... but if you have a big backlog, the task itself is hard to promote unless you have stability issues. Go with the parts you are working on and hope that in a few years you have decent coverage.
* Create consistent formatting across files
IMO the differences in formatting are part of the legacy. They give you a hint about who wrote the code, and when. That can give you some clues about how to behave in that part of the code. Doing the job of reformatting isn't fun, and it doesn't deliver any value to your customer.
* Update 3rd party software
Do it only if there are really nice new features, or if the version you have is not supported by the new operating system.
* Reduce warnings generated by static analysis tools
It can be worth it. Sometimes a warning can hide a potential bug.
I'd say 'remove duplicated code' pretty much means you have to pull code out and abstract it so it can be used in multiple places - this, in theory, makes bugs easier to fix because you only have to fix one piece of code, as opposed to many pieces of code, to fix a bug in it.
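As a trivial sketch of what that extraction can look like (the category name and formatting rule are invented for illustration): once the duplicated logic lives in one place, a bug fix lands once instead of N times.

```objc
#import <Foundation/Foundation.h>

// Before: the same date-formatting snippet copy-pasted across several classes.
// After: one shared helper that every caller uses.
@interface NSDate (ReportFormatting)
- (NSString *)reportDisplayString;
@end

@implementation NSDate (ReportFormatting)
- (NSString *)reportDisplayString
{
    NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
    formatter.dateFormat = @"yyyy-MM-dd";
    return [formatter stringFromDate:self];
}
@end
```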
Add unit tests to improve test coverage. Having good test coverage will allow you to refactor and improve functionality without fear.
There is a good book on this written by the author of CppUnit, Working Effectively with Legacy Code.
Adding tests to legacy code is certainly more challenging than creating them from scratch. The most useful concept I've taken away from the book is the notion of "seams", which Feathers defines as
"a place where you can alter behavior in your program without editing in that place."
Sometimes it's worth refactoring to create seams that will make future testing easier (or possible in the first place). The Google Testing Blog has several interesting posts on the subject, mostly revolving around the process of Dependency Injection.
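A minimal sketch of one kind of seam, constructor injection (the protocol and class names are invented for this example): the collaborator is passed in rather than created inside the method, so a test can substitute a fake without editing the legacy logic itself.

```objc
#import <Foundation/Foundation.h>

// Hypothetical collaborator the legacy code used to hard-wire internally.
@protocol PaymentGateway <NSObject>
- (BOOL)chargeAmount:(double)amount toAccount:(NSString *)account;
@end

@interface OrderProcessor : NSObject
- (instancetype)initWithGateway:(id<PaymentGateway>)gateway;
- (BOOL)processOrderForAccount:(NSString *)account amount:(double)amount;
@end

@implementation OrderProcessor {
    id<PaymentGateway> _gateway;
}

- (instancetype)initWithGateway:(id<PaymentGateway>)gateway
{
    if ((self = [super init])) {
        _gateway = gateway;   // the seam: tests can pass in a fake gateway
    }
    return self;
}

- (BOOL)processOrderForAccount:(NSString *)account amount:(double)amount
{
    // The legacy logic stays as-is; only the hard-wired dependency moved out.
    return [_gateway chargeAmount:amount toAccount:account];
}

@end
```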
I can relate to this question as I currently have in my lap one of 'those' old-school codebases. It's not really legacy, but it certainly hasn't followed the trends of the years.
I'll tell you the things I would love to fix in it as they bug me every day:
Document the input and output variables
Refactor the variable names so they actually mean something, rather than being some Hungarian notation prefix followed by a three-letter acronym with an obscure meaning. CamelCase is the way to go.
I'm scared to death of changing any code as it will affect hundreds of clients that use the software and someone WILL notice even the most obscure side effect. Any repeatable regression tests would be a blessing since there are zero now.
The rest is really peanuts. These are the main problems with a legacy codebase, they really eat up tons of time.
I'd say it largely depends on what you want to do with the legacy code...
If it will indefinitely remain in maintenance mode and it's working fine, doing nothing at all is your best bet. "If it ain't broke, don't fix it."
If it's not working fine, removing the unused code and refactoring the duplicate code will make debugging a lot easier. However, I would only make these changes on the erring code.
If you plan on a version 2.0, add unit tests and clean up the code you will bring forward.
Good documentation. As someone who has to maintain and extend legacy code, that is the number one problem. It's difficult, if not downright dangerous to change code you don't understand. Even if you're lucky enough to be handed documented code, how sure are you that the documentation is right? That it covers all of the implicit knowledge of the original author? That it speaks to all of the "tricks" and edge cases?
Good documentation is what allows those other than the original author to understand, fix, and extend even bad code. I'll take hacked yet well-documented code that I can understand over perfect yet inscrutable code any day of the week.
The single biggest thing that I've done to the legacy code that I have to work with is to build a real API around it. It's a 1970's style COBOL API that I've built a .NET object model around, so that all the unsafe code is in one place, all of the translation between the API's native data types and .NET data types is in one place, the primary methods return and accept DataSets, and so on.
This was immensely difficult to do right, and there are still some defects in it that I know about. It's not terrifically efficient either, with all the marshalling that goes on. But on the other hand, I can build a DataGridView that round-trips data to a 15-year-old application which persists its data in Btrieve (!) in about half an hour, and it works. When customers come to me with projects, my estimates are in days and weeks rather than months and years.
As a parallel to what Josh Segall said, I would say comment the hell out of it. I've worked on several very large legacy systems that got dumped in my lap, and I found the biggest problem was keeping track of what I already learned about a particular section of code. Once I started placing notes as I go, including "To Do" notes, I stopped re-figuring out what I already figured out. Then I could focus on how those code segments flow and interact.
I would say just leave it alone for the most part. If it's not broken then don't fix it. If it is broken then go ahead and fix and improve the portion of the code that is broken and its immediately surrounding code. You can use the pain of the bug or sorely missing feature to justify the effort and expense of improving that part.
I would not recommend any wholesale kind of rewrite, refactor, reformat, or putting in of unit tests that is not guided by actual business or end-user need.
If you do get the opportunity to fix something, then do it right (the chance of doing it right the first time might have already passed, but since you are touching that part again you might as well do it right this time around), and this includes all the items you mentioned.
So in summary, there's no single or just a few things that you should do. You should do it all but in small portions and in an opportunistic manner.
Late to the party, but the following may be worth doing where a function/method is used or referenced often:
Local variables often tend to be poorly named in legacy code (often owing to their scope expanding when a method is modified, and not being updated to reflect this). Renaming these in line with their actual purpose can help clarify legacy code.
Even just laying out the method slightly differently can work wonders - for instance, putting all the clauses of an if on one line.
There might be stale/confusing code comments there already. Remove them if they're not needed, or amend them if you absolutely have to. (Of course, I'm not advocating removal of useful comments, just those that are a hindrance.)
These might not have the massive headline impact you're looking for, but they are low risk, particularly if the code can't be unit tested.
