User Interface Design Package - user-interface

I am working on designing a user interface and want to convey my design to my peers. Typically I'd use a mock-up of the UI and UML, but given the complexity, size, and multiple asynchronous interactions, I'm not sure that makes the most sense. It doesn't seem to let me describe the process efficiently.
Does anyone have experience designing large user interfaces? How would a company that designs UIs go about modeling its process/design? This seems to be a question for the 'front-end' engineers.

Well, let's attempt an answer even though your question is very broad. I'll share my experience and hope it helps you decide.
To my mind, using UML in a project is very useful if you use it right: it's often clear and quite easy to understand. On the other hand, when working on a huge project it can be costly, because it takes time to create your diagrams, and nowadays you have to be as fast as possible!
These last months I have worked on a project with a GUI. To define it well, I first drew some screens on paper and quickly asked for feedback near the coffee machine.
I then created some activity diagrams and, after that, a mock-up.
How to choose now?
First of all, if your GUI should have a beautiful design, I don't think you can avoid a mock-up, but I don't know if that's your goal.
My main tip would be: try to simplify the complexity in your program. If you have multiple asynchronous interactions that can arise at any time and from anywhere, you'll run into a lot of problems.
In my project, the tablet could be preempted by an external computer at any time, from anywhere. I simply created a class beforehand to manage this asynchronous call, so my sub-activity diagrams stayed intact.
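Here is a minimal sketch of that idea; the class and method names are invented for illustration, not taken from the actual project. The point is just that one class owns the preemption, so nothing else in the activity flow has to know about it:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: a single class owns the asynchronous preemption,
// so the rest of the activity flow never deals with it directly.
public class PreemptionManager {
    private final AtomicBoolean preempted = new AtomicBoolean(false);

    // Invoked from the external computer's callback thread.
    public void onExternalPreempt() {
        preempted.set(true);
    }

    // Sub-activities call this at their own safe checkpoints;
    // it returns true at most once per preemption request.
    public boolean shouldYield() {
        return preempted.getAndSet(false);
    }
}
```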
Draw specific diagrams for specific functionalities, and leave out diagrams that are not relevant or are too simple.
It's hard to explain more than this without giving you an example. I hope it helps a bit.


Why is UI programming so time consuming, and what can you do to mitigate this? [closed]

In my experience, UI programming is very time consuming, expensive (designers, graphics, etc.), and error prone; and by definition UI bugs or glitches are very visible and embarrassing.
What do you do to mitigate this problem?
Do you know of a solution that can automatically convert an API to a user interface (preferably a web user interface)?
Probably something like a JMX console
with good defaults
can be tweaked with CSS
where fields can be configured to be radio buttons, a drop-down list, a text field, or a text area, etc.
localizable
etc
Developing UI is time consuming and error-prone because it involves design. Not just visual or sound design but, more importantly, interaction design. A good API is always interaction-model neutral, meaning it puts minimal constraints on the actual workflow, localisation, and information representation. The main driver behind this is encapsulation and code re-use.
As a result it is impossible to extract enough information from API alone to construct a good user interface tailored to a specific case of API use.
However, there are UI generators that normally produce CRUD screens based on a given API. Needless to say, such generated UIs are not very well suited to frequent users who demand higher UI efficiency, nor are they particularly easy to learn in the case of a larger system, since they don't really communicate the system image or the interaction sequence well.
It takes a lot of effort to create a good UI because it needs to be designed according to specific user needs and not because of some mundane API-UI conversion task that can be fully automated.
To speed up the process of building a UI and mitigate the risks, you can either involve UI professionals or learn more about the job yourself. Unfortunately, there is no shortcut or magic wand, so to speak, that will produce a quality UI based entirely and only on an API, without lots of additional information and analysis.
Please also see an excellent question: "Why is good UI design so hard for some developers?" that has some very insightful and valuable answers, specifically:
Shameless plug for my own answer.
Great answer by Karl Fast.
I don't believe UI programming is more time consuming than any other sort of programming, nor is it more error prone. However, bugs in the UI are often more obvious. Spotting an error in a compiler is often much more tricky.
One clear difference with UI programming is that you have a person at the other end instead of another program, which is very often the case when you're writing compilers, protocol parsers, debuggers, and other code that talks to other programs and computers. This means that the entity you're communicating with is not well specified and may behave very erratically.
EDIT: "unpredictable" is probably a more appropriate term. /Jesper
Your question of converting an API to a user interface just doesn't make sense to me. What are you talking about?
Looks like you are looking for the 'Naked Objects' architectural pattern. There are various implementations available.
http://en.wikipedia.org/wiki/Naked_objects
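To give a flavour of the pattern, here is a toy sketch. It is illustrative only, not the API of any real Naked Objects implementation: it derives a crude read-only 'UI' from a domain object's getters via reflection, whereas real frameworks generate editable forms, action menus, and persistence from the same metadata.

```java
import java.lang.reflect.Method;

// Toy illustration of the Naked Objects idea (not a real framework API):
// the "UI" is derived directly from the domain object's structure.
public class NakedObjectViewer {
    public static void render(Object domainObject) throws Exception {
        System.out.println("== " + domainObject.getClass().getSimpleName() + " ==");
        for (Method m : domainObject.getClass().getMethods()) {
            boolean isGetter = m.getName().startsWith("get")
                    && m.getParameterCount() == 0
                    && !m.getName().equals("getClass");
            if (isGetter) {
                String label = m.getName().substring(3); // getName -> Name
                System.out.printf("%s: %s%n", label, m.invoke(domainObject));
            }
        }
    }
}
```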
I'm not providing a solution, but I'll attempt to answer the why.
I don't speak for everyone, but for me at least, I believe one reason is that programmers tend to concentrate on functionality more than usability, and they tend not to be too artistic. I think they just tend to have a different type of creativity. I find that it takes me a long time to create the right graphics, compared to how long it takes me to write the code (though, for the most part, I haven't done any projects with too many graphical requirements).
Automatically generating user interfaces may be possible to some extent, in that it can generate controls for the required input and output of data. But UI design is much more involved than simply putting the required controls onto a screen. In order to create a usable, user-friendly UI, knowledge from disciplines such as graphic design, ergonomics, psychology, etc. has to be combined. There is a reason that human-computer interaction is becoming a discipline of its own: it's not trivial to create a decent UI.
So I don't think there's a real solution to your problem. UI design is a complex task that simply takes time to do properly. The only area where it is relatively easy to win some time is with the tooling: if you have powerful tools to implement the design of the user interface, you don't have to hand-code every pixel of the UI yourself.
You are absolutely correct when you say that UI is time consuming, costly and error prone!
A great compromise I have found is as follows...
I realized that a lot of data (if not most) can be presented using a simple table (such as a JTable), rather than continuously trying to create custom panels and fancy GUIs. It doesn't seem obvious at first, but it's quite decent, usable, and visually appealing.
Why is it so fast? Because I was able to create a reusable framework which can accept a collection of concrete models and, with little to no effort, render all these models within the table. So much code reuse, it's unbelievable.
By adding a toolbar above the window, my framework can add to, remove from, or edit entries in the table. Using the full power of JTables, I can hide (by filtering) and sort as needed by extending various classes (but only if/when this is required).
I find myself reusing a heck of a lot of code every time I want to display and manage new models. I make extensive use of icons (per column, row, or cell, etc.) to beautify the screens. I use large icons as window headers to make each screen 'appear' different and appealing; it always looks like a new and different screen, but it's always the same code behind it.
A lot of work and effort was required at first to build the framework, but now it's paying off big time.
I can write the GUI for an entirely new application with as many as 30 to 50 different models, consisting of as many screens, in a fraction of the time it would take me using the 'custom UI' method.
I would recommend you evaluate and explore this approach!
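As a rough illustration of this approach (the RowModel interface and the class names are invented here, not the poster's actual framework), the core trick is one reusable table model that can render any collection of row objects:

```java
import java.util.List;
import javax.swing.table.AbstractTableModel;

// One reusable model: implement RowModel once per domain type, and
// JTable takes care of rendering and selection; adding a
// javax.swing.table.TableRowSorter supplies the sorting and filtering.
interface RowModel {
    Object valueAt(int column);
}

class GenericTableModel extends AbstractTableModel {
    private final List<? extends RowModel> rows;
    private final String[] columns;

    GenericTableModel(List<? extends RowModel> rows, String[] columns) {
        this.rows = rows;
        this.columns = columns;
    }

    @Override public int getRowCount()               { return rows.size(); }
    @Override public int getColumnCount()            { return columns.length; }
    @Override public String getColumnName(int c)     { return columns[c]; }
    @Override public Object getValueAt(int r, int c) { return rows.get(r).valueAt(c); }
}
```

Hooking it up is then one line per screen, e.g. new JTable(new GenericTableModel(people, new String[]{"Name", "Age"})).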
If you already know, or could learn to use, Ruby on Rails, ActiveScaffold is excellent for this.
One reason is that we don't have a well-developed pattern for UTDD - User Test Driven Development. Nor have I seen many good examples of mapping User Stories to Unit Tests. Why, for example, do so few tutorials discuss User Stories?
ASP.NET Dynamic Data is something that you should investigate. It meets most, if not all, of your requirements.
It's hard because most users/customers are dumb and can't think straight! :)
It's time consuming because UI devs/designers are so obsessive-compulsive! :)

What's the good time balance between designing an application and coding it? [closed]

The question might seem trivial, but it's an actual problem: when you're working on a project, do you do any kind of architecture design before actually starting coding? Do you spend much time working together with a customer to get a detailed specs/usecases/mockups?
During coding, do you alter those architectural decisions made beforehand? Do you go back to the customer with a new set of specs/usecases/mockups?
I'm wondering, what's a good balance between all those non-coding actions and coding itself, from your experience?
Update:
Ok, so from the answers so far it seems like there are 2 approaches:
design early, then sit and code to avoid late fixes
minimize the design alone part, instead do iterative development (agile methodologies seem to prefer it that way).
I guess which way to go depends on the project, team and customer... am I right?
That which minimises the total time spent ;-)
It heavily depends on the kind of project, but generally speaking it's better to "waste" time over-designing and specifying requirements than to find out later that something was wrong and have to go all the way back to fix it.
I read something about quantitative measurements of the impact of poor design decisions in "The Mythical Man-Month", or maybe in a book called something like "Software Requirements Pro Practices" from Microsoft Press; I think the time wasted on a late fix (near product delivery) was about 10x that of a fix in the early stages.
If you do agile, design and coding are the same thing. In my experience it is good to pair program during the very first stage of the project...
Have a look at Scrum, agile, and waterfall. This is related to project management, not programming per se.
Architecture also becomes easier once you have built enough applications within a domain or on a platform. In PHP, if you use Joomla, Symfony, or CodeIgniter, then your scaffolding and architecture are already in place. The same goes for ASP.NET MVC.
My personal experience tells me that you should consider different factors. There's no silver bullet. My personal list follows, grown mostly by experience.
If you are developing something whose details are well known, the development team is dispersed and has difficulty communicating efficiently as a whole, the team has strong or numerous dependencies on the work of other teams, and what you are developing has a fundamental long-term importance that will be difficult to change in the future (e.g. file formats), go for a very long design phase, akin to a waterfall model. You should also spend a lot of time on design if you plan to develop a rather complex application, where you have to consider deeply all the possible interactions between features before coding; coding takes very little time compared to design. Consider it, too, if it is somehow important to keep an efficient record of how the application behaves from a very high-level point of view, or if your team tends to be highly unstable, so that your knowledge stays on paper rather than in people's brains.
If you have to implement something brand new that requires research, you want features as soon as possible, you can grow the application from fast feedback, and you have a pool of geeks who work in the same room, are very committed to your cause, love programming, and are passionate about sharing and building together, go for agile methods.
If you are in between the previous two cases, go for an iterative approach. I normally choose a 3-month schedule. When I code alone, I work agile-like, mostly because I have to cope with frequent disruption, so I add feature by feature. However, I release iteratively; namely, I don't plan to do an official, stable release before the third iteration. I want space to learn the field, make mistakes, and correct them before committing to maintaining some stupid choice.
If you code in academia, you are screwed, because you have some of the issues in 1 without the manpower to accommodate them, and some of the issues in 2 without the easy communication required by agile methods.
Roughly 50/50. Whenever I've analysed my project schedules, it turns out about 50% of the time goes into design, project management, quality control, and auxiliary tasks. The remaining 50% is coding. If I don't see that 50/50 ratio, I worry.
Mind you, this is using the traditional waterfall model (which is more suited to custom-app development). Agile methods are better for shrink-wrapped software, in my opinion.
I would say it's roughly 50/50, no matter the "methodology" or project type. It only varies in how those 50% design are distributed. And that may depend on the project, but most of all it depends on the people who do the work, and how they are "wired". It's more a matter of psychology than methodology.
Some people (I'd say the more cautious characters) need a more detailed mental map before they start coding. If they don't have that map out of prior experience, they will need more "investigation" time up front.
Others like to just "jump in" to coding with only a rough mental map, and work out the details as they go.
Somewhere in between is to do the elaboration via spikes and prototypes, and develop the "big picture" on top of that running code. For me personally this tends to yield the best results, and the least waste. (After all, prototyping is, in a way, a test-first approach applied on solution ideas. You get an idea, test it out in a spike or prototype, then implement/integrate it with the main code base.)
My advice is: Find out the style that feels best to you personally, and stick to it. That's going to be pretty sure the style you are going to be most effective with.
Those two things are tightly coupled. In the first stage, you will definitely spend some time making design decisions. Then you will have to start coding, and in almost all cases you will come up with improvements to your previous design.
After all, it depends on the delivery date and how much time you have overall; you then decide accordingly how you are going to balance it. In general, you make an initial design, and then during coding you update and change it. It is also good practice to involve your customer deeply in design decisions during the development stage, so that they are aware of each change and of how much of your time it will cost.
The longer the period between writing your specification and starting to code, the greater the chance that the requirements will change. So, to answer your question: as soon as possible....
If you're suffering from too much requirements creep, then I would suggest implementing smaller release iterations (if possible), creating new requirements/specification documents for each of these smaller phases.
If you can't do this, make sure you have a good change-management process in place.
My google-fu is failing drastically, but I recently read something to the effect of:
"Spend 6 months coding, 6 months designing and 6 months testing. The good news is, they're all the same 6 months."
It's important to design enough to have a map of what you are trying to code and how it relates to the rest of the system. You can't just start coding most large projects: they're too big, and usually involve multiple components. I did that when I was young, and you end up with a big ball of mud, or you stay up all night for a week refactoring it.
What I tend to do now is design down to the package level and assign roles to components. On large systems, getting to the component-selection stage can take several months and involve some trial and prototype coding.
Then the APIs and implementations of each package are evolved, based on what messages the functionality requires and on how the clients of the packages evolve, which causes further requirements or constraints to emerge. I usually evolve an API by designing a pure interface (by writing the code for it), with unit tests for each known use case, and then implementing it. So there is some writing of code involved in designing: the best representation of the API is usually the code and inline documentation, and it's easiest to confirm that a client can perform the actions required to satisfy a use case (and that the code to do so is not excessively complex) by writing code which exercises the API in that way; that code then trivially becomes a unit test for the implementation of the API when it arrives. But the code written during 'designing' isn't the code which supplies the implementation of the API. For APIs with low coupling (which can be changed without breaking too many clients), I'll switch between designing and implementing modes rapidly; for ones with higher coupling, I'll typically publish the API and use-case examples for peer review before committing to implementing them.
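A tiny sketch of that rhythm, with all names invented for illustration: the design artifact is a pure interface, and the use-case code written against it doubles as a unit test once a real implementation arrives.

```java
import static org.junit.Assert.assertNotNull;
import java.util.List;
import org.junit.Test;

// The "design" artifact: a pure interface capturing one package's API.
interface OrderService {
    // Places an order and returns a tracking id.
    String placeOrder(String customerId, List<String> skuCodes);
}

// Minimal stub so the design compiles and the use case can be exercised
// before the real implementation exists.
class StubOrderService implements OrderService {
    public String placeOrder(String customerId, List<String> skuCodes) {
        return "TRACK-0001";
    }
}

// Design-time client code: it confirms the API supports the use case,
// and later becomes a unit test for the real implementation.
public class OrderServiceUseCaseTest {
    private final OrderService service = new StubOrderService();

    @Test
    public void customerCanPlaceASimpleOrder() {
        String trackingId = service.placeOrder("cust-42", List.of("SKU-1"));
        assertNotNull(trackingId);
    }
}
```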
As aleemb said, this really is a project management question. I suggest you read up on several methodologies, find the useful and not-so-useful parts of each, and evaluate your own circumstances (team size/experience, customer engagement and commitment levels, what's done in your organization, schedule/budget, etc.) and come up with the best schedule you can. It really all just depends on your specific circumstances.
Think about how many people are going to be involved in writing the software.
If it's just a one-developer job, maybe take a smaller percentage for design. If you're going to have 30 people working on it, you probably want a lot bigger fraction for the design.
Getting teams of developers to write software is much like partitioning software across multiple CPUs: you get the best return for an added CPU (read 'developer') when you can minimize the necessary communication between them. You sure don't want to get tens of thousands of lines of code into your project before the developers start discussing architectural issues.
Now you could probably also make the case that, when you do a better job with the design phase, the coding will actually take less time and be less painful. Measure twice and cut once, and all that.
Also, you probably should think about the likelihood of the project being 'put on hold'; design artifacts have much better shelf life than immature code.
Depends on your chosen methodology.
Traditionally with Big Design Up Front or Waterfall you spend 90% of the time designing and 10 % of the time coding. You then spend another 90% of the time handling all the changes that the initial design missed. And another 90% of the time chasing bugs.
With modern Agile development you spend 10% of the time designing and 90% of the time coding. then another 90% handling all the changes that the customer representative forgot to mention and another 90% of the time chasing bugs.

Why is good UI design so hard for some Developers? [closed]

Some of us just have a hard time with the softer aspects of UI design (myself especially). Are "back-end coders" doomed to only design business logic and data layers? Is there something we can do to retrain our brain to be more effective at designing pleasing and useful presentation layers?
Colleagues have recommended a few books to me, including The Design of Sites, Don't Make Me Think, and Why Software Sucks, but I am wondering what others have done to remove their deficiencies in this area?
Let me say it directly:
Improving on this does not begin with guidelines. It begins with reframing how you think about software.
Most hardcore developers have practically zero empathy with users of their software. They have no clue how users think, how users build models of software they use and how they use a computer in general.
It is the typical problem of an expert colliding with a layman: how on earth could a normal person be so dumb as not to understand what the expert understood 10 years ago?
One of the first facts to acknowledge, and one that is unbelievably difficult to grasp for almost all experienced developers, is this:
Normal people have a vastly different concept of software than you have. They have no clue whatsoever of programming. None. Zero. And they don't even care. They don't even think they have to care. If you force them to, they will delete your program.
Now that's unbelievably harsh for a developer. He is proud of the software he produces. He loves every single feature. He can tell you exactly how the code behind it works. Maybe he even invented an unbelievably clever algorithm that made it work 50% faster than before.
And the user doesn't care.
What an idiot.
Many developers can't stand working with normal users. They get depressed by users' non-existent knowledge of technology. And that's why most developers shy away and think users must be idiots.
They are not.
If a software developer buys a car, he expects it to run smoothly. He usually does not care about tire pressures, the mechanical fine-tuning that was important to make it run that way. Here he is not the expert. And if he buys a car that does not have the fine-tuning, he gives it back and buys one that does what he wants.
Many software developers like movies. Well-done movies that spark their imagination. But they are not experts in producing movies, in producing visual effects or in writing good movie scripts. Most nerds are very, very, very bad at acting because it is all about displaying complex emotions and little about analytics. If a developer watches a bad film, he just notices that it is bad as a whole. Nerds have even built up IMDB to collect information about good and bad movies so they know which ones to watch and which to avoid. But they are not experts in creating movies. If a movie is bad, they'll not go to the movies (or not download it from BitTorrent ;)
So it boils down to: Shunning normal users as an expert is ignorance. Because in those areas (and there are so many) where they are not experts, they expect the experts of other areas to have already thought about normal people who use their products or services.
What can you do to remedy this? The more hardcore you are as a programmer, the less open you will be to normal user thinking; it will seem alien and clueless to you. You will think: I can't imagine how people could ever use a computer with this lack of knowledge. But they can. For every UI element, think: Is it necessary? Does it fit the concept the user has of my tool? How can I make them understand? Please read up on usability; there are many good books. It's a whole area of science, too.
Ah and before you say it, yes, I'm an Apple fan ;)
UI design is hard
To the question:
why is UI design so hard for most developers?
Try asking the inverse question:
why is programming so hard for most UI designers?
Coding a UI and designing a UI require different skills and a different mindset. UI design is hard for most developers, not some developers, just as writing code is hard for most designers, not some designers.
Coding is hard. Design is hard too. Few people do both well. Good UI designers rarely write code. They may not even know how, yet they are still good designers. So why do good developers feel responsible for UI design?
Knowing more about UI design will make you a better developer, but that doesn't mean you should be responsible for UI design. The reverse is true for designers: knowing how to write code will make them better designers, but that doesn't mean they should be responsible for coding the UI.
How to get better at UI design
For developers wanting to get better at UI design I have 3 basic pieces of advice:
Recognize design as a separate skill. Coding and design are separate but related. UI design is not a subset of coding. It requires a different mindset, knowledge base, and skill group. There are people out there who focus on UI design.
Learn about design. At least a little bit. Try to learn a few of the design concepts and techniques from the long list below. If you are more ambitious, read some books, attend a conference, take a class, get a degree. There are lots of ways to learn about design. Joel Spolsky's book on UI design is a good primer for developers, but there's a lot more to it, and that's where designers come into the picture.
Work with designers. Good designers, if you can. People who do this work go by various titles. Today, the most common titles are User Experience Designer (UXD), Information Architect (IA), Interaction Designer(ID), and Usability Engineer. They think about design as much as you think about code. You can learn a lot from them, and they from you. Work with them however you can. Find people with these skills in your company. Maybe you need to hire someone. Or go to some conferences, attend webinars, and spend time in the UXD/IA/ID world.
Here are some specific things you can learn. Don't try to learn everything. If you knew everything below, you could call yourself an interaction designer or an information architect. Start with things near the top of the list. Focus on specific concepts and skills. Then move down and branch out. If you really like this stuff, consider it as a career path. Many developers move into management, but UX design is another option.
Learn fundamental design concepts. You should know about affordances, visibility, feedback, mappings, Fitts's law, poka-yokes, and more. I recommend reading The Design of Everyday Things (Don Norman) and Universal Principles of Design (Lidwell, Holden, & Butler).
Learn about user experience. This is becoming the umbrella term for the human-centered design of web sites, applications, and any other digital artifact. The classic primer here is The Elements of User Experience (Jesse James Garrett). You can get an overview and the first few chapters from the author's site.
Learn to sketch designs. Sketching is a fast way to explore design options and find the right design, whereas usability testing is about getting the design right. Paper prototyping is fast, cheap, and effective during the early design stages, much faster than coding a digital prototype. The key text here is Sketching User Experiences: Getting the Design Right and the Right Design (Bill Buxton). Sketching is a particularly useful skill when working with IA/ID/UX designers; your collaboration will be more effective. For a good primer on how and why designers sketch, watch the presentation How to be a UX team of one by Leah Buley from the 2008 IA Summit.
Learn paper prototyping. The fastest way to iteratively test an interface before you write code. Different from sketching and usability testing. The definitive book here is Paper Prototyping (Carolyn Snyder). You can get a good DVD on this from the Nielsen Norman Group.
Learn usability testing. Discount testing is easy and effective. But for many UIs, usability is hard to do well. You can learn the basics quickly, but good usability people are invaluable. If you want a book, the classic is The Handbook of Usability Testing (Jeffrey Rubin). It's older but offers thorough coverage of lab-based testing. The famous starter book is Don't Make Me Think (2nd Ed) (Steve Krug). I caution people about this one: Krug makes it sound easier than it is. But it is a good starting point. The user research books listed in the next point also cover this topic. And you can find piles about it online.
Learn about information architecture. The main book here is Information Architecture for the World Wide Web (3rd) (Louis Rosenfeld & Peter Morville). A good starter book is Information Architecture: Blueprints for the Web (Christina Wodtke). For more, visit the Information Architecture Institute or attend the annual Information Architecture Summit.
Learn about interaction design. The main book here is The Essentials of Interaction Design (3rd) (Alan Cooper, et al). A good starter book is Designing for interaction (Dan Saffer). For more, visit the Interaction Design Association (IxDA) or attend the annual Interaction Design conference.
Learn fundamentals of graphic design. Graphic design is not UI design, but concepts from graphic design can improve an interface. Graphic design introduces design principles for the visual presentation of information, such as proximity, alignment, and small multiples. I recommend reading The Non-Designer's Design Book (Robin Williams) and Envisioning Information (Edward Tufte).
Learn to do user research. Where usability testing evaluates an interface, user research tries to model users and their tasks through personas, scenarios, user journeys, and other documents. It's about understanding users and what they do, then using that to inform the design instead of guessing. Some techniques are interviews, surveys, diary studies, and card sorting. Good books on this are Observing the User Experience (Mike Kuniavsky) and Understanding Your Users (Courage & Baxter).
Learn to do field research. Watching people in the lab under artificial conditions helps (ie: usability), but there is nothing like watching people use your code in context: their home, their office, or wherever they use it. Goes by various names, including ethnography, field studies, and contextual inquiry. Here is a good primer on field research. Two of the better known books here are Rapid Contextual Design (Karen Holtzblatt et al) and User and task analysis for interface design (Hackos & Redish).
Read UX design web sites. Some of the big ones are Boxes & Arrows, UX Mag, UX Matters, and Digital Web magazine.
Use UI pattern libraries. There are patterns for interfaces. For web sites, I recommend The Design of Sites, 2nd ed (Van Duyne, et al) and Homepage usability: 50 websites deconstructed (Jakob Nielsen & Marie Tahir). For desktop applications I recommend Designing interfaces (Jennifer Tidwell), and for web applications I recommend Designing Web Interfaces: Principles and Patterns for Rich Interactions (Bill Scott & Theresa Neil). Online you should check Welie pattern library, UI patterns, and Web UI patterns.
Attend UX design conferences. Some good annual conferences are: Information Architecture Summit, Interaction '09 (IxDA), User Interface, and UX week.
Attend a workshop or webinar. You can take workshops, webinars, and online courses. This is far from a comprehensive list, but you might try the UIE virtual seminars, Adaptive Path virtual seminars, and UX webinars from Rosenfeld Media.
Get a degree. A graduate degree in HCI is one approach, but these programs are mostly about coding. If you want to learn about the design of digital artifacts and devices, then you want a graduate program that's not in CS. Some options include Interaction Design at Carnegie Mellon, the d-School at Stanford, the ITP program at NYU, and Information Architecture & Knowledge Management at Kent State (disclosure: I'm on the faculty at Kent; we are seeing more and more people with CS degrees moving into UX design instead of management, which is interesting, because management is the traditional path for developers who want to move away from writing code while staying in their field). There are many more programs. Each has its own perspective, areas of emphasis, and technical expectations. Some come out of the arts and visual design, others out of library and information science, and some from CS. Most are hybrids, but every hybrid has deeper roots in one or more fields. If this interests you, look around and try to understand the differences between these programs. Some offer online courses and certificate programs in addition to full-fledged degrees.
Why UI design is hard
Good UI design is hard because it involves 2 vastly different skills:
A deep understanding of the machine. People in this group worry about code first, people second. They have deep technological knowledge and skill. We call them developers, programmers, engineers, and so forth.
A deep understanding of people and design: People in this group worry about people first, code second. They have deep knowledge of how people interact with information, computers, and the world around them. We call them user experience designers, information architects, interaction designers, usability engineers, and so forth.
This is the essential difference between these 2 groups—between developers and designers:
Developers make it work. They implement the functionality on your TiVo, your iPhone, your favorite website, etc. They make sure it actually does what it is supposed to do. Their highest priority is making it work.
Designers make people love it. They figure out how to interact with it, how it should look, and how it should feel. They design the experience of using the application, the web site, the device. Their highest priority is making you fall in love with what developers make. This is what is meant by user experience, and it's not the same as brand experience.
Moreover, programming and design require different mindsets, not just different knowledge and different skills. Good UI design requires both mindsets, both knowledge bases, both skill groups. And it takes years to master either one.
Developers should expect to find UI design hard, just as UI designers should expect to find writing code hard.
What really helps me improve my designs is to grab a fellow developer, one of the QA guys, the PM, or anyone who happens to walk by, and have them try out a particular widget or screen.
It's amazing what you will realize when you watch someone else use your software for the first time.
Ultimately, it's really about empathy -- can you put yourself in the shoes of your user?
One thing that helps, of course, is "eating your own dogfood" -- using your applications as a real user yourself, and seeing what's annoying.
Another good idea is to find a way to watch a real user using your application, which may be as complicated as a usability lab with one-way mirrors, screen video capture, video cameras on the users, etc., or can be as simple as paper prototyping using the next person who happens to walk down the hall.
If all else fails, remember that it's almost always better for the UI to be too simple than too complicated. It's very very easy to say "oh, I know how to solve that, I'll just add a checkbox so the user can decide which mode they prefer". Soon your UI is too complicated. Pick a default mode and make the preference setting an advanced configuration option. Or just leave it out.
If you read a lot about design you can easily get hung up on dropped shadows and rounded corners and so forth. That's not the important stuff. Simplicity and discoverability are the important stuff.
Contrary to popular myth, there are literally no soft aspects in UI design; at least no more than are needed to design a good back end.
Consider the following: good back-end design is based upon fairly solid principles and elements any good developer is familiar with:
low coupling
high cohesion
architectural patterns
industry best practices
etc
Good back-end design is usually born through a number of iterations where, based on measurable feedback obtained during tests or actual use, the initial blueprint is gradually improved. Sometimes you need to prototype smaller aspects of the back end and trial them in isolation, etc.
Good UI design is based on the sound principles of:
visibility
affordance
feedback
tolerance
simplicity
consistency
structure
UI is likewise born through test and trial, through iterations, but not with a compiler and an automated test suite: with people. Similarly to the back end, there are industry best practices, measurement and evaluation techniques, ways to think of UI, and ways to set goals in terms of the user model, system image, designer model, structural model, functional model, etc.
The skill set needed for designing UI is quite different from that needed for designing the back end, so don't expect to be able to do good UI without doing some learning first. What both these activities have in common, however, is the process of design. I believe that anyone who can design good software is capable of designing a good UI, as long as they spend some time learning how.
I recommend taking a course in human-computer interaction; check the MIT and Yale sites, for example, for online materials:
MIT User Interface Design and Implementation Course
Structural vs Functional Model in Understanding and Usage
The excellent earlier post by Thorsten79 brings up the topic of software development experts vs. users and how their understandings of software differ. Human learning experts distinguish between functional and structural mental models. Finding the way to your friend's house is an excellent example of the difference between the two:
The first approach is a set of detailed instructions: take the first exit off the motorway, then after 100 yards turn left, etc. This is an example of a functional model: a list of concrete steps necessary to achieve a certain goal. Functional models are easy to use; they do not require much thinking, just straightforward execution. Obviously there is a penalty for the simplicity: it might not be the most efficient route, and any exceptional situation (e.g. a traffic diversion) can easily lead to complete failure.
A different way to cope with the task is to build a structural mental model. In our example that would be a map, which conveys a lot of information about the internal structure of the "task object". From understanding the map and the relative locations of our house and our friend's house, we can deduce the functional model (the route). It requires more effort, but it is a much more reliable way of completing the task in spite of possible deviations.
The choice between conveying a functional or a structural model through the UI (for example, wizard vs. advanced mode) is not as straightforward as it might seem from Thorsten79's post. Advanced and frequent users might well prefer the structural model, whereas occasional or less experienced users prefer the functional one.
Google Maps is a great example: it includes both the functional and the structural model, as do many sat navs.
Another dimension of the problem is that the structural model presented through the UI must not map to the structure of the software, but rather map naturally to the structure of the user task at hand or the task object involved.
The difficulty here is that many developers have a good structural model of their software's internals, but only a functional model of the user task the software aims to assist with. To build a good UI, one needs to understand the task/task-object structure and map the UI to that structure.
Anyway, I still can't recommend taking a formal HCI course strongly enough. There's a lot of material involved, such as heuristics, principles derived from Gestalt psychology, the ways humans learn, etc.
I suggest you start by doing all your UI in the same way as you are doing now, with no focus on usability and stuff.
(image: a screenshot of Microsoft Word's toolbars, http://www.stricken.org/uploaded_images/WordToolbars-718376.jpg)
Now think of this:
A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away.
— Saint-Exupéry
And apply this in your design.
A lot of developers think that because they can write code, they can do it all. Designing an interface is a completely different skill, and it was not taught at all when I attended college. It's not something that just comes naturally.
Another good book is The Design of Everyday Things by Donald Norman.
There's a huge difference between design and aesthetics, and they are often confused.
A beautiful UI requires artistic, or at least aesthetic, skills that many, including myself, are incapable of producing. Unfortunately, it is not enough and does not make the UI usable, as we can see in many heavyweight Flash-based apps.
Producing usable UIs requires an understanding of how humans interact with computers, some issues in psychology (e.g., Fitts's law, Hick's law), and other topics. Very few CS programs train for this. Very few developers that I know will pick a user-testing book over a JUnit book, etc.
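For reference, since both laws come up in this thread, their usual statements are as follows (a and b are empirically fitted constants; the Fitts's law form below is the Shannon formulation):

```latex
% Fitts's law: time to hit a target of width W at distance D
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
% Hick's law: time to choose among n equally likely alternatives
RT = a + b \log_2(n + 1)
```

Both predict that time grows only logarithmically, which is why large, close targets and short choice lists feel fast.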
Many of us are also "core programmers", tending to think of UIs as the facade rather than as a factor that could make or break the success of our project.
In addition, most UI development experience is extremely frustrating. We can either use toy GUI builders like old VB and have to deal with ugly glue code, or we use APIs that frustrate us to no end, like trying to sort out layouts in Swing.
Go over to Slashdot, and read the comments on any article dealing with Apple. You'll find a large number of people talking about how Apple products are nothing special, and ascribing the success of the iPod and iPhone to people trying to be trendy or hip. They will typically go through feature lists, and point out that they do nothing earlier MP3 players or smart phones didn't do.
Then there are people who like the iPod and iPhone because they do what the users want simply and easily, without reference to manuals. The interfaces are about as intuitive as interfaces get, memorable, and discoverable. I'm not as fond of the UI on MacOSX as I was on earlier versions, I think they've given up some usefulness in favor of glitz, but the iPod and iPhone are examples of superb design.
If you are in the first camp, you don't think the way the average person does, and therefore you are likely to make bad user interfaces because you can't tell them from good ones. This doesn't mean you're hopeless, but rather that you have to explicitly learn good interface design principles, and how to recognize a good UI (much as somebody with Asperger's might need to learn social skills explicitly). Obviously, just having a sense of a good UI doesn't mean you can make one; my appreciation for literature, for example, doesn't seem to extend to the ability (currently) to write publishable stories.
So, try to develop a sense for good UI design. This extends to more than just software. Don Norman's "The Design of Everyday Things" is a classic, and there are other books out there. Get examples of successful UI designs, and play with them enough to get a feel for the difference. Recognize that you may have to learn a new way of thinking about things, and enjoy it.
The main rule of thumb I hold to, is never try to do both at once. If I'm working on back-end code, I'll finish up doing that, take a break, and return with my UI hat on. If you try to work it in whilst you're doing code, you'll approach it with the wrong mindset, and end up with some horrible interfaces as a result.
I think it's definitely possible to be both a good back-end developer and a good UI designer, you just have to work at it, do some reading and research on the topic (everything from Miller's #7, to Nielsen's archives), and make sure you understand why UI design is of the utmost importance.
I don't think it's a case of needing to be creative but rather, like back-end development, it is a very methodical, very structured thing that needs to be learned. It's people getting 'creative' with UIs that creates some of the biggest usability monstrosities... I mean, take a look at 100% Flash websites, for a start...
Edit: Krug's book is really good... do take a read of it, especially if you're going to be designing for the Web.
There are many reasons for this.
(1) The developer fails to see things from the point of view of the user. This is the usual suspect: lack of empathy. But it is not usually the cause, since developers are not as alien as people make them out to be.
(2) Another, more common reason: the developer, being so close to his own stuff and having stayed with it for so long, fails to realize that his stuff may not be so familiar (a term better than intuitive) to other people.
(3) Still another reason is that the developer lacks techniques.
MY BIG CLAIM: read any UI, human interaction design, or prototyping book, e.g. Designing the Obvious: A Common Sense Approach to Web Application Design; Don't Make Me Think: A Common Sense Approach to Web Usability; Designing the Moment; whatever.
How do they discuss task flows? How do they describe decision points? That is, in any use case, there are at least 3 paths: success, failure/exception, alternative.
Thus, from point A, you can go to A.1, A.2, A.3.
From point A.1, you can get to A.1.1, A.1.2, A.1.3, and so on.
How do they show such drill-down task flow?
They don't. They just gloss over it.
Since even UI experts don't have a technique for this, developers have no chance.
The developer thinks it is clear in his head. But it is not even clear on paper, let alone in the software implementation.
I have to use my own hand-made techniques for this.
I try to keep in touch with design-specific websites and texts. I also found the excellent Robin Williams book The Non-Designer's Design Book very interesting for these studies.
I believe that design and usability are a very important part of software engineering, and we should learn more about them and stop making excuses that we are not supposed to do design.
Everyone can be a designer once in a while, just as everyone can be a programmer.
When approaching UI design, here are a few of the things I keep in mind throughout (by far not a complete list):
Communicating a model. The UI is a narrative that explains a mental model to the user. This model may be a business object, a set of relationships, what have you. The visual prominence, spatial placement, and workflow ordering all play a part in communicating this model to the user. For example, a certain kind of list vs another implies different things, as well as the relationship of what's in the list to the rest of the model. In general I find it best to make sure only one model is communicated at a time. Programmers frequently try to communicate more than one model, or parts of several, in the same UI space.
Consistency. Re-using popular UI metaphors helps a lot. Internal consistency is also very important.
Grouping of tasks. Users should not have to move the mouse all the way across the screen to verify or complete a related sequence of commands. Modal dialogs and flyout-menus can be especially bad in this area.
Knowing your audience. If your users will be doing the same activities over and over, they will quickly become power users at those tasks and be frustrated by attempts to lower the initial entry barrier. If your users do many different kinds of activities infrequently, it's best to ensure the UI holds their hand the whole time.
Read Apple Human Interface Guidelines.
I find the best tool in UI design is to watch a first-time user attempt to use the software. Take loads of notes and ask them some questions. Never direct them or attempt to explain how the software works. That is the job of the UI (and of well-written documentation).
We consistently adopt this approach in all projects. It is always fascinating to watch a user deal with software in a manner you never considered before.
Why is UI design so hard? Well, generally because the developer and the user never meet.
duffymo just reminded me why: many programmers think "Design" == "Art".
Good UI design is absolutely not artistic. It follows solid principles that can be backed up with data, if you've got the time to do the research.
I think all programmers need to do is take the time to learn the principles. I think it's in our nature to apply best practice whenever we can, be it in code or in layout. All we need to do is make ourselves aware of what the best practices are for this aspect of our job.
What have I done to become better at UI design?
Pay attention to it!
It's like how every time you see a chart on the news or an electronic bus sign, you wonder 'How did they get that data? Did they do that with raw SQL, or are they using LINQ?' (or insert your own common geek curiosity here).
You need to start doing that but with visual elements of all kinds.
But just like learning a new language, if you don't really throw yourself into it, you won't ever learn it.
Taken from another answer I wrote:
Learn to look, really look, at the world around you. Why do I like that UI but hate this one? Why is it so hard to find the noodle dishes in this restaurant menu? Wow, I knew what that sign meant before I even read the words. Why was that? How come that book cover looks so wrong? Learn to take the time to think about why you react the way you do to visual elements of all kinds, and then apply this to your work.
However you do it (and there are some great points above), it really helped me once I accepted that there is NO SUCH THING AS INTUITIVE....
I can hear the arguments rumbling on the horizon... so let me explain a little.
Intuitive: using what one feels to be right or true based on an unconscious method or feeling.
If (as Carl Sagan postulated) you accept that you cannot comprehend things that are absolutely unlike anything you have ever encountered then how could you possibly "know" how to use something if you have never used anything remotely like it?
Think about it: kids try to open doors not because they "know" how a doorknob works, but because they have seen someone else do it... often they turn the knob in the wrong direction, or pull too soon. They have to LEARN how a doorknob works. This knowledge then gets applied in different but similar instances: opening a window, opening a drawer, opening almost anything big with a big, knob-looking handle.
Even simple things that seem intuitive to us will not be intuitive at all to people from other cultures. If someone held their arm out in front of them and waved their hand up and down at the wrist while keeping the arm still... are they waving you away? Probably, unless you are in Japan, where this hand signal can mean "come here". So who is right? Both, of course, in their own context. But if you travel to both places, you need to know both... UI design.
I try to find the things that are already "familiar" to the potential users of my project and then build the UI around them: user-centric design.
Take a look at Apple's iPhone. Even if you hate it, you have to respect the amount of thought that went into it. Is it perfect? Of course not. Over time an object's perceived "intuitiveness" can grow or even fade away completely.
For example. Most everyone knows that a strip of black with two rows of holes along the top and bottom looks like a film strip... or do they?
Ask your average 9 or 10 year old what they think it is. You may be surprised how many kids right now will have a hard time identifying it as a film strip, even though it is something that is still used to represent Hollywood, or anything film (movie) related. Most movies for the past 20 years have been digitally shot. And when was the last time any of us held a piece of film of ANY kind, photos or film?
So, what it all boils down to for me is: Know your audience and constantly research to keep up with trends and changes in things that are "intuitive", target your main users and try not to do things that punish the inexperienced in favor of the advanced users or slow down the advanced users in order to hand-hold the novices.
Ultimately, every program will require a certain amount of training on the user's part to use it. How much training and for which level of user is part of the decisions that need to be made.
Some things are more or less familiar based on your target user's past experience level as a human being, or computer user, or student, or whatever.
I just shoot for the fattest part of the bell curve and try to get as many people as I can but realizing that I will never please everyone....
I know that Microsoft is rather inconsistent with its own guidelines, but I have found that reading their Windows design guidelines has really helped me. I have a copy on my website here; just scroll down a little to the Vista UX Guide. It has helped me with things such as colors, spacing, layouts, and more.
I believe the main problem has nothing to do with different talents or skillsets. The main problem is that as a developer, you know too much about what the application does and how it does it, and you automatically design your UI from the point of view of someone who has that knowledge.
Whereas a user typically starts out knowing absolutely nothing about the application and should never need to learn anything about its inner workings.
It is very hard, almost impossible, not to use knowledge that you have, and that's why a UI should not be designed by someone who's developing the app behind it.
"Designing from both sides of the screen" presents a very simple but profound reason as to why programmers find UI design hard: programmers are trained to think in terms of edge cases while UI designers are trained to think in terms of common cases or usage.
So going from one world to the other is certainly difficult if the default traning in either is the exact opposite of the other.
To say that programmers suck at UI design is to miss the point. The point of the problem is that the formal training most developers get goes in depth on the technology. Human-computer interaction is not a simple topic. It is not something that I can "mind-meld" to you by providing a simple one-line statement that makes you realize "oh, the users will use this application more effectively if I do x instead of y."
This is because there is one part of UI design that you are missing: the human brain. In order to understand how to design a UI, you have to understand how the human mind interacts with machinery. There is an excellent course I took at the University of Minnesota on this topic, taught by a professor of psychology, called "Human-Machine Interaction". It describes many of the reasons why UI design is so complicated.
Since psychology is based on correlation and not causality, you can never prove that a method of UI design will always work in any given situation. You can show that many users will find a particular UI design appealing or efficient, but you cannot prove that it will always generalize.
Additionally, there are two parts to UI design that many people seem to miss: the aesthetic appeal, and the functional workflow. If you go for 100% aesthetic appeal, sure, people will buy your product. I highly doubt that aesthetics will ever reduce user frustration, though.
There are several good books on this topic and courses to take (like Bill Buxton's Sketching User Experiences, and Cognition in the Wild by Edwin Hutchins). There are graduate programs on human-computer interaction at many universities.
The overall answer to this question, though, lies in how individuals are taught computer science. It is all math based and logic based, not based on the user experience. To get that, you need more than a generic 4-year computer science degree (unless your degree had a minor in psychology and emphasized human-computer interaction).
Let's turn your question around -
Are "ui designers" doomed to only design information architecture and presentation layers? Is there something they can do to retrain their brains to be more effective at designing pleasing and efficient system layers?
Seems like them "ui designers" would have to take a completely different perspective - they'd have to look from the inside of the box outwards; instead of looking in from outside the box.
Alan Cooper's "The Inmates are Running the Asylum" opinion is that we can't successfully take both perspectives - we can learn to wear one hat well but we can't just switch hats.
I think it's because a good UI is not logical. A good UI is intuitive.
Software developers are typically bad at 'intuitive'.
A useful framing is to actively consider what you're doing as designing a process of communication. In a very real sense, your interface is a language that the user must use to tell the computer what to do. This leads to considering a number of points:
Does the user already speak this language? Using a highly idiosyncratic interface is like communicating in a language you've never spoken before. So if your interface must be idiosyncratic at all, it had best introduce itself with the simplest of terms and few distractions. On the other hand, if your interface uses idioms that the user is accustomed to, they'll gain confidence from the start.
The enemy of communication is noise. Auditory noise interferes with spoken communication; visual noise interferes with visual communication. The more noise you can cut out of your interface, the easier communicating with it will be.
As in human conversation, it's often not what you say, it's how you say it. The way most software communicates is rude to a degree that would get it punched in the face if it were a person. How would you feel if you asked someone a question and they sat there and stared at you for several minutes, refusing to respond in any other way, before answering? Many interface elements, like progress bars and automatic focus selection, have the fundamental function of politeness. Ask yourself how you can make the user's day a little more pleasant.
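To make the politeness point concrete, here is a minimal sketch in modern C# (all names invented for illustration) of a long-running operation that reports progress instead of going silent:

    using System;
    using System.IO;
    using System.Threading.Tasks;

    public class FileImporter
    {
        // Report progress as we work instead of leaving the user staring at
        // a frozen window. IProgress<T> marshals each callback back to the
        // UI thread automatically.
        public async Task ImportAsync(string path, IProgress<int> progress)
        {
            string[] lines = await File.ReadAllLinesAsync(path);
            for (int i = 0; i < lines.Length; i++)
            {
                ProcessLine(lines[i]); // hypothetical per-line work
                progress.Report((i + 1) * 100 / lines.Length);
            }
        }

        private void ProcessLine(string line) { /* parse, validate, store... */ }
    }

    // In a WinForms/WPF event handler:
    //   var progress = new Progress<int>(p => progressBar.Value = p);
    //   await new FileImporter().ImportAsync("data.csv", progress);

The specific API matters less than the attitude: the interface acknowledges the user the way a polite person acknowledges a question.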
Really, it's hard to say what programmers do think of interface interaction as, if not a process of communication; maybe the problem is that it doesn't get thought of as being anything at all.
There are a lot of good comments already, so I am not sure there is much I can add.
But still...
Why would a developer expect to be able to design good UI?
How much training did he have in that field?
How many books did he read?
How many things did he design, over how many years?
Did he have the opportunity to see the reactions of their users?
We don't expect a random "Joe the plumber" to be able to write good code.
So why would we expect the random "Joe the programmer" to design good UI?
Empathy helps. Separating the UI design and the programming helps. Usability testing helps.
But UI design is a craft that has to be learned, and practiced, like any other.
Developers are not (necessarily) good at UI design for the same reason they aren't (necessarily) good at knitting; it's hard, it takes practice, and it doesn't hurt to have someone show you how in the first place.
Most developers (me included) started "designing" UIs because it was a necessary part of writing software. Until a developer puts in the effort to get good at it, s/he won't be.
To improve, just look around at existing sites. In addition to the books already suggested, you might like to have a look at Robin Williams's excellent book "The Non-Designer's Design Book" (sanitised Amazon link)
Have a look at what's possible in visual design by taking a look at the various submissions over at The Zen Garden as well.
UI design is definitely an art, though; like pointers in C, some people get it and some people don't.
But at least we can have a chuckle at their attempts. BTW Thanks OK/Cancel for a funny comic and thanks Joel for putting it in your book "The Best Software Writing I" (sanitised Amazon link).
User interface isn't something that can be applied after the fact, like a thin coat of paint. It is something that needs to be there at the start, and based on real research. There's tons of Usability research available of course. It needs to not just be there at the start, it needs to form the core of the very reason you're making the software in the first place: There's some gap in the world out there, some problem, and it needs to be made more usable and more efficient.
Software is not there for its own sake. The reason for a piece of software to exist is FOR PEOPLE. It's absolutely ludicrous to even try to come up with an idea for a new piece of software without understanding why anyone would need it. Yet this happens all the time.
Before a single line of code is written, you should go through paper versions of the interface and test them on real people. This is kind of weird and silly; it works best with kids, and with someone entertaining acting as "the computer".
The interface needs to take advantage of our natural cognitive facilities. How would a caveman use your program? For instance, we've evolved to be really good at tracking moving objects. That's why interfaces that use physics simulations, like the iphone, work better than interfaces where changes occur instantaneously.
We are good at certain kinds of abstraction, but not others. As programmers, we're trained to do mental gymnastics and backflips to understand some of the weirdest abstractions. For instance, we understand that a sequence of arcane text can represent and be translated into a pattern of electromagnetic state on a metal platter, which when encountered by a carefully designed device, leads to a sequence of invisible events that occur at lightspeed on an electronic circuit, and these events can be directed to produce a useful outcome. This is an incredibly unnatural thing to have to understand. Understand that while it's got a perfectly rational explanation to us, to the outside world, it looks like we're writing incomprehensible incantations to summon invisible sentient spirits to do our bidding.
The sorts of abstractions that normal humans understand are things like maps, diagrams, and symbols. Beware of symbols, because symbols are a very fragile human concept that take conscious mental effort to decode, until the symbol is learned.
The trick with symbols is that there has to be a clear relationship between the symbol and the thing it represents. Either the thing it represents is a noun, in which case the symbol should look VERY MUCH like the thing it represents, or it represents a more abstract concept, which has to be explained IN ADVANCE. See the inscrutable unlabeled icons in MS Word's or Photoshop's toolbars, and the abstract concepts they represent. It has to be LEARNED that the crop tool icon in Photoshop means CROP TOOL. It has to be understood what CROP even means. These are prerequisites to correctly using that software. Which brings up an important point: beware of ASSUMED knowledge.
We only gain the ability to understand maps around the age of 4. I think I read somewhere once that chimpanzees gain the ability to understand maps around the age of 6 or 7.
The reason that GUIs were so successful to begin with is that they changed a landscape of mostly textual interfaces to computers into something that mapped computer concepts onto something resembling a physical place. Where GUIs fail in terms of usability is where they stop resembling something you'd see in real life. There are invisible, unpredictable, incomprehensible things that happen in a computer that bear no resemblance to anything you'd ever see in the physical world. Some of this is necessary, since there'd be no point in just making a reality simulator; the idea is to save work, so there has to be a bit of magic. But that magic has to make sense, and be grounded in an abstraction that human beings are well adapted to understanding. It's when our abstractions start getting deep, and layered, and mismatched with the task at hand that things break down. In other words, the interface doesn't function as a good map for the underlying software.
There are lots of books. The two I've read, and can therefore recommend, are "The Design of Everyday Things" by Donald Norman and "The Humane Interface" by Jef Raskin.
I also recommend a course in psychology. "The Design of Everyday Things" talks about this a bit. A lot of interfaces break down because of a developer's "folk understanding" of psychology. This is similar to "folk physics". "An object in motion stays in motion" doesn't make any sense to most people. "You have to keep pushing it to keep it in motion!" thinks the physics novice. User testing doesn't make sense to most developers. "You can just ask the users what they want, and that should be good enough!" thinks the psychology novice.
I recommend Discovering Psychology, a PBS documentary series hosted by Philip Zimbardo. Failing that, try to find a good psychology textbook. The expensive kind. Not the pulp-fiction self-help crap that you find in Borders, but the thick hardbound stuff you can only find in a university library. This is a necessary foundation. You can do good design without it, but you'll only have an intuitive understanding of what's going on. Reading some good books will give you a proper perspective.
If you read the book "Why software sucks" you would have seen Platt's answer, which is a simple one:
Developers prefer control over user-friendliness
Average people prefer user-friendliness over control
But another answer to your question would be "why is dentistry so hard for some developers?" - UI design is best done by a UI designer.
http://dotmad.net/blog/2007/11/david-platt-on-why-software-sucks/

Legacy Code Nightmare [closed]

I've inherited a project where the class diagrams closely resemble a spider web on a plate of spaghetti. I've written about 300 unit tests in the past two months to give myself a safety net covering the main executable.
I have my library of agile development books within reach at any given moment:
Working Effectively with Legacy Code
Refactoring
Code Complete
Agile Principles Patterns and Practices in C#
etc.
The problem is everything I touch seems to break something else.
The UI classes have business logic and database code mixed in. There are mutual dependencies between a number of classes. There are a couple of god classes that break every time I change any of the other classes. There's also a mutant singleton/utility class with about half instance methods and half static methods (though ironically the static methods rely on the instance and the instance methods don't).
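To give a flavor, a contrived C# sketch of that shape (invented names, obviously not the real code):

    // Half static, half instance, with the dependencies pointing the wrong
    // way: the static methods need the instance, while the instance methods
    // touch no instance state at all.
    public class AppUtility
    {
        public static readonly AppUtility Instance = new AppUtility();

        public static string CustomerLabel(int id)
        {
            return Instance.LookupName(id); // static depends on instance
        }

        public string LookupName(int id)
        {
            return "Customer " + id;        // could have been static
        }
    }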
My predecessors even thought it would be clever to use all the datasets backwards. Every database update is sent directly to the db server as parameters in a stored procedure, then the datasets are manually refreshed so the UI will display the most recent changes.
I'm sometimes tempted to think they used some form of weak obfuscation for either job security or as a last farewell before handing the code over.
Are there any good resources for untangling this mess? The books I have are helpful but only seem to cover half the scenarios I'm running into.
It sounds like you're tackling it in the right way.
Test
Refactor
Test again
Unfortunately, this can be a slow and tedious process. There's really no substitute for digging in and understanding what the code is trying to accomplish.
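One concrete way to "dig in" without changing behavior first is a characterization test in the spirit of Working Effectively with Legacy Code: assert what the code does today, not what it should do. A sketch, assuming NUnit and an invented legacy class:

    using NUnit.Framework;

    [TestFixture]
    public class InvoiceCalculatorCharacterizationTests
    {
        // The expected value was captured by running the existing code, not
        // taken from a spec; the test pins current behavior down so any
        // refactoring that changes it gets caught immediately.
        [Test]
        public void Total_ForKnownInput_MatchesCurrentBehavior()
        {
            var calculator = new InvoiceCalculator(); // hypothetical legacy class
            decimal total = calculator.Total(quantity: 3, unitPrice: 9.99m);
            Assert.AreEqual(29.97m, total);
        }
    }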
One book that I can recommend (if you don't already have it filed under "etc.") is Refactoring to Patterns. It's geared towards people who are in your exact situation.
I'm working in a similar situation.
If it is not a small utility but a big enterprise project then it is:
a) too late to fix it
b) beyond the capabilities of a single person to attempt a)
c) can only be fixed by a complete rewrite of the stuff, which is out of the question
Refactoring can in many cases only be attempted in your private time, at your personal risk. If you don't get an explicit mandate to do it as part of your daily job, you'll likely not even get any credit for it. You may even be criticized for "pointlessly wasting time on something that has worked perfectly well for a long time already".
Just continue hacking it the way it has been hacked before, receive your paycheck and so on. When you get completely frustrated or the system reaches the point of being non-hackable any further, find another job.
EDIT: Whenever I attempt to address the question of the true architecture and doing things the right way, I usually get LOL'd in my face directly by the responsible managers, who say something like "I don't give a damn about good architecture" (attempted translation from German). I have personally brought one very bad component to the point of non-hackability, while of course having given advance warnings months ahead. They then had to cancel some features promised to customers because it was not doable any longer. No one touches it anymore...
I've worked this job before. I spent just over two years on a legacy beast that is very similar. It took two of us over a year just to stabilize everything (it's still broke, but it's better).
First thing -- get exception logging into the app if it doesn't exist already. We used FogBugz, and it took us about a month to get reporting integrated into our app; it wasn't perfect right away, but it was reporting errors automatically. It's usually pretty safe to implement try-catch blocks in all your events, and that will cover most of your errors.
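As a sketch of what "get exception logging in first" can look like in a WinForms app (the log target here is a stand-in for FogBugz or whatever reporting tool you use, and MainForm is hypothetical):

    using System;
    using System.Windows.Forms;

    static class Program
    {
        [STAThread]
        static void Main()
        {
            // Exceptions thrown on the UI thread...
            Application.ThreadException += (s, e) => LogAndReport(e.Exception);
            // ...and on any other thread.
            AppDomain.CurrentDomain.UnhandledException +=
                (s, e) => LogAndReport(e.ExceptionObject as Exception);

            Application.Run(new MainForm());
        }

        static void LogAndReport(Exception ex)
        {
            if (ex == null) return;
            // Stand-in: append to a file, post to your bug tracker, etc.
            System.IO.File.AppendAllText("errors.log", ex + Environment.NewLine);
        }
    }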
From there fix the bugs that come in first. Then fight the small battles, especially those based on the bugs. If you fix a bug that unexpectedly affects something else, refactor that block so that it is decoupled from the rest of the code.
It will take some extreme measures to rewrite a big, critical-to-company-success application, no matter how bad it is. Even if you get permission to do so, you'll be spending too much time supporting the legacy application to make any progress on the rewrite anyway. If you do many small refactorings, eventually either the big ones won't be that big, or you'll have really good foundation classes for your rewrite.
One thing to take away from this is that it is a great experience. It will be frustrating, but you will learn a lot.
I have (once) come across code that was so insanely tangled that I couldn't replace it with a functional duplicate in a reasonable amount of time. That was sort of a special case though, as it was a parser and I had no idea how many clients might be "using" some of the bugs it had. Rendering hundreds of "working" source files erroneous was not a good option.
Most of the time it is eminently doable, just daunting. Read through that refactoring book.
I generally start fixing bad code by moving things around a bit (without actually changing implementation code more than required) so that modules and classes are at least somewhat coherent.
When that is done, you can take your more coherent class and rewrite its guts to perform the exact same way, but this time with sensible code. This is the tricky part with management, as they generally don't like to hear that you are going to take weeks to code and debug something that will behave exactly the same (if all goes well).
During this process I guarantee you will discover tons of bugs, and outright design stupidities. It's OK to fix trivial bugs while recoding, but otherwise leave such things for later.
Once this is done with a couple of classes, you will start to see where things can be modularized better, designed better, etc. Plus it will be easier to make such changes without impacting unrelated things because the code is now more modular, and you probably know it thoroughly.
Mostly, that sounds pretty bad. But I don't understand this part:
"My predecessors even thought it would be clever to use all the datasets backwards. Every database update is sent directly to the db server as parameters in a stored procedure, then the datasets are manually refreshed so the UI will display the most recent changes."
That sounds pretty close to a way I frequently write things. What's wrong with this? What's the correct way?
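For concreteness, the conventional "forward" DataSet workflow being contrasted with looks roughly like this (a sketch; the connection string and schema are invented): edit the local DataTable, then let the adapter push only the changes back, instead of writing through stored procedures and re-fetching everything afterwards.

    using System.Data;
    using System.Data.SqlClient;

    var connection = new SqlConnection("...connection string here...");
    var adapter = new SqlDataAdapter("SELECT Id, Name FROM Customers", connection);
    var builder = new SqlCommandBuilder(adapter); // generates the write commands

    var customers = new DataTable();
    adapter.Fill(customers);

    customers.Rows[0]["Name"] = "Renamed Customer"; // local edit
    adapter.Update(customers);                      // writes only the changed rows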
If your refactorings are breaking code, particularly code that seems to be unrelated, then you're trying to do too much at a time.
I recommend a first-pass refactoring where all you do is ExtractMethod: the goal is simply to name each step in the code, without any attempts at consolidation whatsoever.
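A contrived sketch of that naming-only pass (everything here is invented; Order is assumed to have Quantity, UnitPrice and Total, and repository is some data-access field):

    // Before: one long method where the steps are implicit.
    public void ProcessOrder(Order order)
    {
        if (order.Quantity <= 0)
            throw new ArgumentException("Quantity must be positive");
        order.Total = order.Quantity * order.UnitPrice;
        repository.Save(order);
    }

    // After an ExtractMethod-only pass: identical behavior, named steps.
    public void ProcessOrder(Order order)
    {
        Validate(order);
        CalculateTotal(order);
        Persist(order);
    }

    private void Validate(Order order)
    {
        if (order.Quantity <= 0)
            throw new ArgumentException("Quantity must be positive");
    }

    private void CalculateTotal(Order order)
    {
        order.Total = order.Quantity * order.UnitPrice;
    }

    private void Persist(Order order) => repository.Save(order);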
After that, think about breaking dependencies, replacing singletons, consolidation.
If your refactorings are breaking things, then it means you don't have adequate unit test coverage - as the unit tests should have broken first. I recommend you get better unit test coverage second, after getting exception logging into place.
I then recommend you do small refactorings first - Extract Method to break large methods into understandable pieces; Introduce Variable to remove some duplication within a method; maybe Introduce Parameter if you find duplication between the variables used by your callers and the callee.
And run the unit test suite after each refactoring or set of refactorings. I'd say run them all until you gain confidence about which tests will need to be rerun every time.
No book will be able to cover all possible scenarios. It also depends on what you'll be expected to do with the project and whether there is any kind of external specification.
If you'll only have to do occasional small changes, just do those and don't bother starting to refactor.
If there is a specification (or you can get someone to write it), consider a complete rewrite if it can be justified by the foreseeable amount of changes to the project.
If "the implementation is the specification" and there are a lot of changes planned, then you're pretty much hosed. Write LOTS of unit tests and start refactoring in small steps.
Actually, unit tests are going to be invaluable no matter what you do (if you can write them to an interface that's not going to change much with refactorings or a rewrite, that is).
See blog post Anatomy of an Anti-Corruption Layer, Part 1 and Anatomy of an Anti-Corruption Layer, Part 2.
It cites Eric Evans, Domain-Driven Design: Tackling Complexity in the Heart of Software:
Access the crap behind a facade
You could extract and then refactor some part of it, to break the dependencies and isolate layers into different modules, libraries, assemblies, and directories. Then you re-inject the cleaned parts into the application with a strangler application strategy. Lather, rinse, repeat.
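A minimal sketch of what that facade seam can look like (all names invented):

    // New code depends only on this clean interface.
    public interface ICustomerDirectory
    {
        string GetDisplayName(int customerId);
    }

    // The facade translates the legacy tangle's quirks in exactly one place.
    public class LegacyCustomerDirectory : ICustomerDirectory
    {
        public string GetDisplayName(int customerId)
        {
            string raw = GodClass.Instance.FetchCust(customerId); // hypothetical legacy call
            return string.IsNullOrEmpty(raw) ? "(unknown)" : raw.Trim();
        }
    }

When a clean replacement is ready, it implements the same interface, and the legacy class can be strangled out without touching any of the callers.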
Good luck, that is the tough part of being a developer.
I think your approach is good, but you need to focus on delivering business value (number of unit tests is not a measure of business value, but it may give you an indication if you are on or off track). It's important to have identified the behaviors that need to be changed, prioritize, and focus on the top ones.
The other piece of advice is to remain humble. Realize that if you wrote something so large under real deadlines and someone else saw your code, they would probably have problems understanding it as well. There is a skill in writing clean code, and there is a more important skill in dealing with other people's code.
The last piece of advice is to try to leverage the rest of your team. Past team members may have information about the system that you can learn from. Also, they may be able to help test behaviors. I know the ideal is to have automated tests, but if someone can help by verifying things for you manually, consider getting their help.
I particularly like the diagram in Code Complete, in which you start with just legacy code, a rectangle of fuzzy grey texture. Then when you replace some of it, you have fuzzy grey at the bottom, solid white at the top, and a jagged line representing the interface between the two.
That is, everything is either 'nasty old stuff' or 'nice new stuff'. One side of the line or the other.
The line is jagged, because you're migrating different parts of the system at different rates.
As you work, the jagged line gradually descends, until you have more white than grey, and eventually just white.
Of course, that doesn't make the specifics any easier for you. But it does give you a model you can use to monitor your progress. At any one time you should have a clear understanding of where the line is: which bits are new, which are old, and how the two sides communicate.
You might find the following post useful:
http://refactoringin.net/?p=36
As the post says, don't discard a complete rewrite too easily. Also, if at all possible, try to replace whole layers or tiers either with a third-party solution (for example, an ORM for persistence) or with new code. But most important of all, try to understand the logic (problem domain) behind the code.

Design & Coding - top to bottom or bottom to top? [closed]

When coding, what in your experience is a better approach?
Break the problem down into small enough pieces and then implement each piece.
Break the problem down, but then implement using a top-down approach.
Any other?
I tend to design top-down and implement bottom-up.
For implementation, building the smallest functional pieces and assembling them into the higher-level structures seems to be what works best for me. But, for design, I need to start from the overall picture and break it down to determine what those pieces will be.
Here's what I do:
Understand the domain first. Understand the problem to be solved. Make sure you and the customer (even if that customer is you!) are on the same page as to what problem is to be solved.
Then propose a high-level solution to the problem. From that, the design will turn into bubbles or bullets on a page or whatever, but the point is that it will shake out into components that can be designed.
At that point, I write tests for the yet-to-be written classes and then flesh out the classes to pass those tests.
I use a test-first approach and build working, tested components. That is what works for me. When the component interfaces are known and the 'rules' are known for how they talk to each other and provide services to each other, then it becomes generally a straightforward 'hook everything together' exercise.
That's how I do it, and it has worked well for me.
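As a sketch of that test-first rhythm (NUnit assumed, names invented): the test is written against the class you wish existed, then the class is fleshed out until it passes.

    using NUnit.Framework;

    [TestFixture]
    public class PriceCalculatorTests
    {
        // Written before PriceCalculator existed; it defines the contract.
        [Test]
        public void AppliesTenPercentDiscountOverOneHundred()
        {
            var calc = new PriceCalculator();
            Assert.AreEqual(99.0m, calc.Discounted(110.0m));
        }
    }

    // The minimal implementation fleshed out to make the test pass.
    public class PriceCalculator
    {
        public decimal Discounted(decimal price) =>
            price > 100m ? price * 0.9m : price;
    }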
You might want to look over the Agile Manifesto. Top down and bottom up are predicated on Build It All At Once design and construction.
The "Working software over comprehensive documentation" means the first thing you build is the smallest useful thing you can get running. Top? Bottom? Neither.
When I was younger, I worked on projects that were -- by contract -- strictly top down. This doesn't work. Indeed, it can't work. You get mountains of redundant design and code as a result. It was not a sound approach when applied mindlessly.
What I've noticed is that the Agile approach -- small pieces that work -- tends to break the problem down to parts that can be grasped all at once. The top-down/bottom-up no longer matters as much. Indeed, it may not matter at all.
Which leads to: "How do you decompose for Agile development?" The trick is to avoid creating A Big Thing that you must then decompose. If you analyze a problem, you find actors trying to accomplish use cases and failing because they don't have all the information, or they don't have it in time, or they can't execute their decisions, or something like that.
Often, these aren't Big Things that need decomposition. When they are, you need to work through the problem in the Goals Backward direction: from goals, to things that enable you to meet those goals, to things that enable the enablers, etc. Since goals are often Big Things, this tends to be Top Down -- from general business goal to detailed business process and step.
At some point, we overview these various steps that lead to the goals. We've done the analysis part (breaking things down). Now comes the synthesis part: we reassemble what we have into things we can actually build. Synthesis is Bottom Up. However, let's not get carried away. We have several points of view, each of which is different.
We have a model. This is often built from details into a larger conceptual model, then sometimes decomposed again into a model normalized for OLTP, or into a star schema normalized for OLAP. Then we work back up to create an ORM mapping from the normalized model. Up - Down - Up.
We have processing. This is often built from summaries of the business processes down into details of processing steps. Then software is designed around the steps. Then the software is broken into classes and methods. Down - Up - Down.
[Digression. With enlightened users, this decomposition defines new job titles and ways of working. With unenlightened users, the old jobs stay and we write mountains of documentation to map old jobs onto new software.]
We have components. We often look at the pieces, look at what we know about available components, and do a kind of matching. This is the most random process; it's akin to the way crystals form -- there are centers of nucleation and the design kind of solidifies around those centers. Web Services. Database. Transaction Management. Performance. Volume. Different features that somehow help us pick components that implement some or all of our solution. Often feels bottom-up (from feature to product), but sometimes top-down ("I'm holding a hammer, call everything a nail" == use the RDBMS for everything).
Eventually we have to code. This is bottom up. Kind of. You have to define a package structure. You have to define classes as a whole. That part was top down. You have to write methods within the classes. I often do this bottom-up -- rough out the method, write a unit test, finish the method. Rough out the next method, write a unit test, finish the method.
The driving principle is Agile -- build something that works. The details are all over the map -- up, down, front, back, data, process, actor, subject area, business value.
Yes. Do all of those things.
It may seem sarcastic (sorry, I revert to form), but this really is a case where there is no right answer.
Also in the agile way, write your test(s) first!
Then all software is a continual cycle of:
Red - the code fails the test
Green - the code passes the test
Refactor - code improvements that are intention-preserving.
Defects, new features, changes - it all follows the same pattern.
Your 2nd option is a reasonable way to go. If you break the problem down into understandable chunks, the top down approach will reveal any major design flaws before you implement all the little details. You can write stubs for lower level functionality to keep everything hanging together.
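For instance, a sketch of keeping the top level runnable with stubs (invented names):

    using System;

    // Top-down: the high-level flow is real; the lower levels are stubs
    // that keep everything compiling and hanging together.
    public class ReportGenerator
    {
        public void Run()
        {
            var rows = LoadData();
            var summary = Summarize(rows);
            Publish(summary);
        }

        // Stubs to be fleshed out one at a time.
        private string[] LoadData() => new[] { "stub row" };
        private string Summarize(string[] rows) => $"{rows.Length} rows";
        private void Publish(string summary) => Console.WriteLine(summary);
    }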
I think there's more to consider than top-down versus bottom-up design. You obviously need to break the design up into manageable units of work, but you also need to consider prioritisation etc. And in an iterative development project, you will often redefine the problem for the next iteration once you've delivered the solution for the previous one.
When designing, I like to do middle-out. I like to model the domain, then design out the classes, move to the database and UI from there. If there are specific features that are UI-based or database-based, I may design those up front as well.
When coding, I generally like to do bottom-up (database first, then business entities, then UI) if at all possible. I find it is a lot easier to keep things straight with this method.
I believe that with good software designers (and in my opinion all software developers should also be software designers at some level), the magic is in being able to do top-down and bottom-up simultaneously.
What I was "schooled" to do by my mentors is to start with a very brief top-down pass to understand the entities involved, then move to bottom-up to figure out the basic elements I want to create, then go back up to see how I can go one level down, knowing what I know from the results of my bottom-up work, and so forth until "they meet in the middle".
Hope that helps.
Outside-in design.
You start with what you're trying to achieve at the top end, and you know what you've got to work with at the bottom end. Keep working both ends until they meet in the middle.
I sort of agree with all of the people saying "neither", but everyone falls somewhere on the spectrum.
I'm more of a top-down kind of guy. I pick one high level feature/point/whatever and implement it as a complete program. This lets me sketch out a basic plan and structure within the confines of the problem domain.
Then I start with another feature and refactor out everything from the original that can be used by the second into new, shared entities. Lather, rinse, repeat until application is complete.
However, I know a lot of people who are bottom up guys, who hear a problem and start thinking about all the support subsystems that they could need to build the application on top of it.
I don't believe either approach is wrong or right. They both can achieve results. I even try and find bottom up guys to work with, as we can attack the problem from two different perspectives.
Both are valid approaches. Sometimes one just "feels" more natural than the other. However, there is one big problem: some mainstream languages, and especially their frameworks and libraries, rely heavily on IDE support, such as syntax highlighting, background type checking, background compilation, intelligent code completion, IntelliSense and so on.
However, this doesn't work with top-down coding! In top-down coding, you constantly use variables, fields, constants, functions, procedures, methods, classes, modules, traits, mixins, aspects, packages and types that you haven't implemented yet! So, the IDE will constantly yell at you because of compile errors, there will be red squiggly lines everywhere, you will get no code completion and so on. So, the IDE pretty much prohibits you from doing top-down coding.
I do a variant of top-down. I tend to try to do the interface first - I then use that as my list of features. What's good about this variant is that it still works with an IDE that would otherwise complain: just comment out the one function call to whatever hasn't been implemented yet.
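A related trick, for what it's worth: instead of commenting calls out, declare the not-yet-written pieces as stubs that throw, so the code compiles and the IDE's type checking keeps helping (a sketch with invented names):

    using System;

    public interface IAccountService
    {
        decimal GetBalance(int accountId);
        void Transfer(int fromId, int toId, decimal amount);
    }

    // Compiles immediately, so code completion and background type checking
    // work while the top-down design proceeds; each stub is replaced as you
    // descend a level.
    public class AccountService : IAccountService
    {
        public decimal GetBalance(int accountId) =>
            throw new NotImplementedException();

        public void Transfer(int fromId, int toId, decimal amount) =>
            throw new NotImplementedException();
    }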

Resources