Where to start with tech specs for an IT project, more specifically functional specifications

Hi, I'm wondering if someone knows some good resources on writing project specs.
I'm a freelance developer, more focused on actual development and less on tech
specs. I'm involved in a project where I have to write technical specs (and
functional requirement specs) and have absolutely no idea where to start.
Any good site, sample, or book you would advise?

I strongly recommend User Stories Applied by Mike Cohn.

Any non-trivial project requires a tech spec. By non-trivial I mean more than about one week of coding or more than one programmer.
There aren't many resources online that explain how to write a good tech spec, so let me share my view of the field.
Every spec should contain:
Title
Overview (in general words, what the project is about)
Operational purpose (what the project is for; the goal)
Functional purpose (the ways and technical methods/resources used to fulfill the operational purpose)
Definitions (to avoid ambiguity and for clarification purposes)
Data and lists (the most important and biggest part of the tech spec: the section describing data structures, relational database models, choice of programming languages and tools, algorithms, etc.)
Wireframes and page descriptions
Technical requirements (hosting conditions, system requirements, etc.)
Commissioning and acceptance conditions (all the criteria that make your job complete)


Missing requirements in Joel's Functional Spec

I assume most people have read the Painless Functional Specifications articles by Joel. In part two, What's a Spec?, a sample spec is provided. However, there is no mention of requirements. I have two questions:
How do requirements fit into the sample functional spec? I assume the requirements must be known before a functional spec can be written, so they can't be part of the functional spec, but where are they recorded?
How does test-driven development (TDD) fit into the functional spec / tech spec split Joel outlines (below)?
A functional specification describes how a product will work entirely from the user's perspective. It doesn't care how the thing is implemented. It talks about features. It specifies screens, menus, dialogs, and so on.
A technical specification describes the internal implementation of the program. It talks about data structures, relational database models, choice of programming languages and tools, algorithms, etc.
Functional design
This is the WHAT.
What are you designing? What will users do with it? What value will it provide them?
The functional spec is the requirements. Each operation the various users perform (create account, log in, view time) is a requirement of the system.
You have to go deeper, though, and ask yourself, "what happens if Mike can't remember his password?" "What does 'exciting' mean to Cindy?" etc. (This is why Joel notes it isn't a complete spec—it is missing many details.)
TDD
Test-driven development is the HOW.
How do the classes, methods, etc. work? How are errors handled? How does data flow through the code?
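To make the WHAT/HOW split concrete, here is a minimal, hypothetical sketch of the TDD side (Python; reset_password, its error type, and all the names are invented for illustration, not part of Joel's sample spec). The tests are written first, encoding the answer to "what happens if Mike can't remember his password?", and the implementation grows just enough to pass them.

```python
import unittest

# Hypothetical sketch: the tests pin down the HOW for one requirement.
# reset_password and PasswordResetError are invented names, not a real API.

class PasswordResetError(Exception):
    pass

def reset_password(email, known_accounts):
    """Issue a reset token for a known account; reject unknown emails."""
    if email not in known_accounts:
        raise PasswordResetError("no account for %s" % email)
    return {"email": email, "token": "one-time-token"}  # stand-in token

class TestPasswordReset(unittest.TestCase):
    def test_known_user_gets_token(self):
        result = reset_password("mike@example.com", {"mike@example.com"})
        self.assertEqual(result["email"], "mike@example.com")

    def test_unknown_user_is_rejected(self):
        with self.assertRaises(PasswordResetError):
            reset_password("nobody@example.com", {"mike@example.com"})

if __name__ == "__main__":
    unittest.main()
```

The functional spec only says a forgotten password must not leak or crash; the tests and code above are one possible answer to the HOW.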

Tips on creating user interfaces and optimizing the user experience [closed]

I am currently working on a project where a lot of user interaction is going to take place. There is also a commercial side as people can buy certain items and services.
In my opinion, a good blend of user interface, speed, and security is essential for these types of websites. It is fairly easy to use Ajax and JavaScript nowadays to do almost everything, as there are a lot of libraries available, such as jQuery. But this can bring performance and incompatibility issues, which can lead to users simply going to the next website.
The overall look of the website is important too: where to place certain buttons, where to place certain types of articles such as FAQ and support, and where and how to display error messages so that the user sees them without being bothered by them. An overall color scheme is important too.
The basic question is: how do you create an interface that triggers a user to buy/use your services?
I know psychology also plays a huge role in how users interact with your website. The color scheme, for example, is important; when the colors on a website are irritating, you just want to click away. I have not found any articles that explain those concepts.
Does anyone have any tips and/or resources pointing to articles that guide you in making the right choices for your website?
Adhere to some standard UI Design Principles:
The structure principle: Your design should organize the user interface purposefully, in meaningful and useful ways based on clear, consistent models that are apparent and recognizable to users, putting related things together and separating unrelated things, differentiating dissimilar things and making similar things resemble one another. The structure principle is concerned with your overall user interface architecture.
The simplicity principle: Your design should make simple, common tasks simple to do, communicating clearly and simply in the user's own language, and providing good shortcuts that are meaningfully related to longer procedures.
The visibility principle: Your design should keep all needed options and materials for a given task visible without distracting the user with extraneous or redundant information. Good designs don't overwhelm users with too many alternatives or confuse them with unneeded information.
The feedback principle: Your design should keep users informed of actions or interpretations, changes of state or condition, and errors or exceptions that are relevant and of interest to the user, through clear, concise, and unambiguous language familiar to users.
The tolerance principle: Your design should be flexible and tolerant, reducing the cost of mistakes and misuse by allowing undoing and redoing, while also preventing errors wherever possible by tolerating varied inputs and sequences and by interpreting all reasonable actions reasonably.
The reuse principle: Your design should reuse internal and external components and behaviors, maintaining consistency with purpose rather than merely arbitrary consistency, thus reducing the need for users to rethink and remember.
Try to look for websites or web applications that have successfully achieved the goal you are aiming for; study their UIs and try to find the common parameters and patterns that engage users on those sites.
I always believe Amazon is very good at keeping users engaged by showing relevant recommendations: what other people are looking at, "people who bought this also bought", and that kind of recommendation.
Another good read: What should a developer know about interface design, usability and user psychology
Also, a good read on UI design considerations for e-commerce websites.
When it comes to UI design, ideally you will have an actual visual designer provide some guidance on your use of colors and a UxD provide some insight into your layout and flows based upon their expertise in these areas. Barring these folks having some input, if you design the pages and create the visuals yourself, iterative discovery is the best method to inform your design and provide insight into how these items affect the user and the overall experience you have created.
While there are certainly numerous books you can read and "guidelines" you can follow (and should for the initial design phases), no amount of book learning can replace real user interactions.
Build a functional prototype of your site/app/service/etc. and get it in front of actual users to gauge usability and value. This should be done in an ad-hoc format (versus formal usability testing), and the prototype should consist of smoke and mirrors as needed (i.e. it could be only clickable comps, or primarily images with only the flows you're testing actually working).
Once you have some level of prototype, bring it to a place where people tend to be (and where you have internet access if needed). I have found Starbucks to be great for this. Grab some people and ask if you can have 10 minutes of their time; you'll find tons of willing participants. Provide these folks with a simple, specific scenario to complete in your prototype, then watch and learn.
People in a real-world situation using your software will quickly find its flaws, and you'll learn more than you could ever glean from a book or guideline. You'll be iterating on the design and tweaking items every time you test.
Test like this over a few weeks and you'll be discovering the right design very quickly. Once you have something that people can use and find value in, you're ready to go live. But testing should not end there: once live, you should continue to test and tweak via A/B and multivariate testing while keeping a close eye on your analytics and user behavior.
Discovery testing followed by continual A/B testing allows you to continuously tweak, test, and learn, and ultimately to create the best solution possible.
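For the A/B part, here is a minimal sketch of how you might judge whether a variant actually won, assuming you log visitors and conversions per variant (Python standard library only; the traffic numbers are made up):

```python
from math import sqrt
from statistics import NormalDist

# Illustrative sketch: a two-sided, two-proportion z-test for
# "variants A and B convert at the same rate".

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # rate if no difference
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# e.g. 120/2400 conversions on A vs 156/2450 on B:
print(ab_test_p_value(120, 2400, 156, 2450))  # ~0.04: suggests a real difference
```

A small p-value is evidence the difference isn't noise; with small traffic numbers, keep the test running longer before acting.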

How to measure software development performance? [closed]

I am looking for some ways to measure the performance of a software development team. Is it a good idea to use the build tool? We use Hudson as an automated build tool. I wonder if I can take the information from Hudson's reports and obtain from it the progress of each of the programmers.
The main problem with performance metrics like this is that humans are VERY good at gaming any system that measures their own performance to maximize that exact performance metric, usually at the expense of something else that is valuable.
Let's say we do use the Hudson build to gather stats on programmer output. What could you look for, and what would be the unintended side effects of measuring that once programmers are clued onto it?
Lines of code (developers just churn out mountains of boilerplate code, and other needless overengineering, or simply just inline every damn method)
Unit test failures (don't write any unit tests, then they won't fail)
Unit test coverage (write weak tests that exercise the code, but don't really test it properly)
Number of bugs found in their code (don't do any coding, then you won't get bugs)
Number of bugs fixed (choose the easy/trivial bugs to work on)
Actual time to finish a task based against their own estimate (estimate higher to give more room)
And it goes on.
The point is, no matter what you measure, humans (not just programmers) get very good at optimizing to meet exactly that thing.
So how should you look at the performance of your developers? Well, that's hard. And it involves human managers, who are good at understanding people (and the BS they pull), and can look at each person subjectively in the context of who/where/what they are to figure out if they are doing a good job or not.
What you do once you've figured out who is/isn't performing is a whole different question though.
(I can't take credit for this line of thinking. It's originally from Joel Spolsky. Here and here)
Do NOT measure the performance of each individual programmer simply using the build tool. You can measure the team as a whole, sure, or you can certainly measure the progress of each programmer, but you cannot measure their performance with such a tool. Some modules are more complicated than others, some programmers are tasked with other projects, etc. It's not a recommended way of doing this, and it will encourage programmers to write sloppy code so that it looks like they did the most work.
No.
Metrics like that are doomed to failure. Different people work on different parts of the code, on different classes of problem, and absolute measurements are misleading at best.
The way to measure developer performance is to have excellent managers that do their job well, have good specs that accurately reflect requirements, and track everyone's progress carefully against those specs.
It's hard to do right. A software solution won't work.
Deciding how to measure developer performance needs a very careful approach, as most of the traditional methods (lines of code, number of check-ins, number of bugs fixed, etc.) have proven subjective under today's software engineering concepts. We need to value a team performance approach rather than measuring individual KPIs in a project. However, working in a commercial development environment, it is important to keep track of and a close eye on the following factors for individual developers:
Code review comments – for each project, we can decide the number of code reviews to be conducted in a given period. Based on the code reviews, individuals get remarks about improvements to their coding standards. Recurring issues in code reviews of the same individual's code need to be brought to attention. You can use automated code review tools or manual code reviews.
Test coverage and completeness of tests – the percentage to be covered needs to be decided upfront, and if a certain developer often fails to reach it, it needs to be taken care of.
Willingness to sign up for complex tasks and deliver them without much struggle.
Achieving what's defined as "Done" in a user story.
Mastery level of each technical area.
With an agile approach in some of the projects, the measurements of the development team and the expected performance are decided per release. At each release planning, different "contracts" are negotiated with the team members for the expected performance. I find this approach more successful, as there is no reason to adhere to UI-related measurements in a release where a complex algorithm is to be delivered.
I would NOT recommend using build tool information as a way to measure the performance/progress of software developers. Some of the confounding problems: possibly one task is considerably harder than another; possibly one task is much more involved in "design space" than "implementation space"; possibly (probably) the more efficient solution is the better solution, but that better solution contributes fewer lines of code than a terribly inefficient one which provides many, many more; etc.
Speaking of KPIs for software developers, www.smartKPIs.com may be a good resource for you. It contains a user-friendly library of well-documented performance measures. At the moment it lists over 3300 KPI examples, grouped in 73 functional areas, as well as 83 industries and sub-categories.
KPI examples for software developers are available on this page: www.smartKPIs.com - application development. They include, but are not limited to:
Defects removal efficiency
Data redundancy
In addition to examples of performance measures, www.smartKPIs.com also contains a catalogue of performance reports that illustrate the use of KPIs in practice.
Examples of such reports for information technology are available on: www.smartKPIs.com - KPIs in practice - information technology
The website is updated daily with new content, so check it from time to time for additional content.
Please note that while examples of performance measures are useful to inform decisions, each performance measure needs to be selected and customized based on the objectives and priorities of each organisation.
You would probably do better measuring how well your team tracks to schedules. If a team member (or the entire team) is consistently late, you will need to work with them to improve performance.
Don't short-cut or look for quick and easy ways to measure the performance/progress of developers. There are many, many factors that affect the output of a developer. I've seen a lot of people try various metrics ...
Lines of code produced - encourages developers to churn out inefficient garbage
Complexity measures - encourages over analysis and refactoring
Number of bugs produced - encourages people to seek out really simple tasks and to hate your testers
... the list goes on.
When reviewing a developer you really need to look at how good their work is and define "good" in the context of what the company needs and what situations/positions the company has put that individual in. Progress should be evaluated with equal consideration and thought.
There are many different ways of doing this; entire books have been written on the subject. You could use reports from Hudson, but I think that would lead to misinformation and provide crude results. Really, you need a task-tracking methodology.
Check how many lines of code each has written.
Then fire the bottom 70%... NO, 90%!... EVERY DAY!
(for the folks that aren't sure, YES, I am joking. Serious answer here)
We get 360 feedback from everyone on the team. If all your team members think you are crap, then you probably are.
There is a common mistake that many businesses make when setting up their release management tool. The Salesforce release management toolkit is one of the best ones available in the market today, but if you do not follow the vital steps of setting it up, you will definitely have some very bad results. You will get to use it but not to its full capacity. Establishing release management processes in isolation from the business processes is one of the worst mistakes to make. Release management tools go hand in hand with the enterprise strategy, objectives, governance, change management plus some other aspects. The processes of release management need to be formed in such a way that everyone in the business is on the same page.
Goals of release management
The main goal of release management is to have a consistent set of reliable and repeatable processes that are resource independent. This enables the achievement of the most favorable business value while at the same time optimizing the utilization of resources available. Considering that most organizations focus on running short, high-yield business projects, it is essential for optimization of the delivery value chain of the application to make certain that there are no holdups in the delivery of the business value.
Take for instance the force.com migration toolkit, as this tool has proven to be great in governance. A release management tool should allow for optimal visibility and accountability in governance.
Processes and release cycles
The release management processes must be consistent for the whole business. It is necessary to have streamlined and standardized processes across the various tool users. This is because they will be using the same platform and resources that enable efficient completion of their tasks. Having different processes for different divisions of your business can lead to grievous failures in tool management. The different sets of users will need to have visibility into what the others are doing. As aforementioned, visibility is of great importance in any business process.
When it comes to the release cycles, it is also imperative to have one centralized system that will track all the requirements of the different sets of users. It is also necessary to have this system centralized so that software development teams get insight into the features and changes requested by the business. Requests have to become priorities to make sure that the business gets to enjoy maximum benefit. Having a steering team is important because it is involved in the reviewing of business requirements plus also prioritizing the most appropriate changes that the business needs to make.
The changes that should happen to the Salesforce system can be very tricky and therefore having a regular meet up between the business and IT is good. This will help to determine the best changes to make to the system that will benefit the business. By considering the cost and value of implementing a feature, the steering committee has the task of deciding on the most important feature changes to make.
There is also good research here: http://intersog.com/blog/tech-tips/how-to-manage-millennials-on-software-development-teams
This is an old question, but still: something you can do is borrow velocity from agile software development, where you assign a weight to each task and then calculate how much "weight" you complete in each sprint (or iteration, or whatever SDLC unit you use). Of course this goes hand in hand with the fact that, as a commenter mentioned before, you need to actively keep track yourself of whether your developers are working or chatting online.
If you know your developers are working responsibly, then you can rely on that velocity to give you an estimate of how much work the team can do. If at any iteration this number drops considerably, then either it was poorly estimated or the team worked less.
Ultimately, the use of KPIs together with velocity can give you per-developer (or per-team) insights on performance.
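As a rough sketch of that arithmetic (Python, with invented sprint data): velocity is just the sum of completed weights per iteration, and a considerable drop is the thing to investigate.

```python
# Hypothetical sketch: track completed task weights ("story points")
# per sprint and flag a considerable drop against the running average.

sprints = {
    "sprint-1": [3, 5, 2, 8],   # weights of tasks finished that sprint
    "sprint-2": [5, 5, 3],
    "sprint-3": [2, 1],         # something happened here
}

velocities = {name: sum(done) for name, done in sprints.items()}
average = sum(velocities.values()) / len(velocities)

for name, v in velocities.items():
    note = "  <- investigate: poor estimate, or less work done?" if v < 0.7 * average else ""
    print(f"{name}: velocity {v}{note}")
```

The 0.7 threshold is an arbitrary illustration; pick whatever counts as "considerable" for your team.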
Typically, directly using metrics for performance measurement is considered a Bad Idea, and one of the easy ways to run a team into the ground.
Now, you can use metrics like % of projects completed on time, % of churn as code goes toward completion, etc. It's a wide field.
Here's an example:
60% of mission-critical bugs were written by Joe. That's a simple, straightforward metric. Fire Joe, right?
But wait, there's more!
Joe is the Senior Developer. He's the only guy trusted to write ultra-reliable code, every time. He's written about 80% of the mission-critical software, because he's the best.
Metrics are a bad measurement of developers.
I'll share my experience and how I learned a very valuable process for measuring team performance. I must admit I had fallen into tracking KPIs simply because most departments did the same, and not really for the insight, until I had the responsibility of evaluating developer performance; after a good deal of reading, I arrived at the following solution.
On every project, I would engage the team in a discussion of the project requirements and involve them so everyone knows what is to be done. In the same discussion, through collaboration, we would break the project into tasks and weight those tasks. Previously we would estimate project completion as 100%, with each task contributing a percentage. That worked for a while, but it was not the best solution. Now we base the tasks on weight, or points to be exact, and use relative measurements to compare tasks and differentiate the weights. For example, say there is a requirement to develop a web form to gather user data.
The tasks would go something like:
1. User Interface - 2 points
2. Database CRUD - 5 points
3. Validation - 4 points
4. Design (CSS) - 3 points
With this strategy we can pinpoint a weekly approximation of how much we have completed and what is pending on the task force. We can also pinpoint who has performed best; a sketch of the bookkeeping follows below.
I must admit that I still face some challenges with this strategy, such as the fact that not every developer is comfortable with every technology. Some are excited to learn new technologies; others, finding that a high percentage of the points falls in an unfamiliar area, will simply do what they can.
Remember: do not track KPIs for their own sake, track them for their insight.
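Here is the bookkeeping sketch referred to above (Python; the task names, owners, and done flags are invented). With weighted tasks, both the overall completion percentage and the per-developer totals fall out of a simple sum:

```python
# Hypothetical sketch of weight-based progress tracking. Each task has a
# point weight, an owner, and a done flag; names and numbers are invented.

tasks = [
    ("User Interface", 2, "asha",  True),
    ("Database CRUD",  5, "ben",   True),
    ("Validation",     4, "asha",  False),
    ("Design (CSS)",   3, "chris", False),
]

total = sum(points for _, points, _, _ in tasks)
done = sum(points for _, points, _, done_flag in tasks if done_flag)
print(f"completed: {100 * done / total:.0f}% of {total} points")

by_dev = {}
for _, points, owner, done_flag in tasks:
    if done_flag:
        by_dev[owner] = by_dev.get(owner, 0) + points
print("points delivered per developer:", by_dev)
```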

Besides "treat warnings as errors" and fixing memory leaks, what other ideas should we implement as part of our coding standards?

First let me say, I am not a coder, but I help manage a coding team. No one on the team has more than about 5 years of experience, and most of them have only worked for this company. So we are flying a bit blind, hence the question.
We are trying to make our software more stable and are looking to implement some "best practices" and coding standards. Recently we started taking this very seriously, as we determined that much of the instability in our product could be traced back to the fact that we allowed warnings to go through without fixing them when compiling. We also never took memory leaks seriously enough.
In reading through this site we are now quickly fixing this problem with our team, but it raises the question: what other practices can we implement team-wide that will help us?
Edit: We do fairly complex 2D/3D Graphics Software that is cross-platform Mac/Windows in C++.
Typically, the level of precision/exactingness in coding standards and process is directly connected to the safety level required. E.g., if you are working in aerospace, you will tightly control pretty much everything. But on the other end of the spectrum, if you are working on a computer gaming forum site and something breaks, no biggie. You can have slop. So YMMV, depending on your field.
The classic book on coding is Code Complete, 2nd edition, by Steve McConnell. Have a team copy and strongly recommend your developers purchase it (or have the company get it for them). That will satisfy probably 70% of the stylistic questions. CC addresses the majority of development cases.
edit:
Graphics software, C++, Mac/Windows.
Since you're doing cross-platform work, I would recommend having an automated "compile-on-checkin" process for your Mac (10.4 (maybe), 10.5, 10.6) and Windows (XP (maybe), Vista, 7) targets. This ensures your software at least compiles, and you know when it doesn't.
Your source control (which you are using, I assume) should support branching, and your branching strategy can reflect cross-platformy-ness as well. It's also advantageous to have mainline branches, dev branches, and experimental branches. YMMV; you will probably need to iterate on that and consult with people who are familiar with configuration management.
Since it's C++, you will probably want to be running Valgrind or similar to know if there is a memory leak. There are some static analyzers you can get; I don't know how effective they are with the modern C++ idiom. You can also invest in writing some wrappers to help watch memory allocations.
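For reference, a typical command-line setup along those lines. The flags shown are standard GCC/Clang, MSVC, and Valgrind options, but adjust them to your own toolchain and build system:

```
# treat warnings as errors at compile time (GCC/Clang)
g++ -Wall -Wextra -Werror -o app main.cpp

# MSVC equivalent
cl /W4 /WX main.cpp

# check a debug build for leaks with Valgrind's memcheck
valgrind --leak-check=full ./app
```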
Regarding C++: the books Effective C++, More Effective C++, and Effective STL (all by Scott Meyers) should be on someone's shelf, as well as Modern C++ Design by Andrei Alexandrescu. You may find Lippman's book on the C++ object model useful as well, I don't know.
HTH.
There are a lot of consultants/companies who have coding rules to sell you; you should have no difficulty finding one. However, one that doesn't first ask you what field you are in (you didn't mention it in your question) is providing you with snake oil.
Test-Driven Development. TDD helps check for logic errors at the development phase.
Get everyone to read and discuss various standards and guidelines. I (as well as Stroustrup) suggest the Joint Strike Fighter coding standards. Ask your developers to classify the guidelines therein among
Already met
Could be met easily (few changes from current condition)
Should work toward in old code and follow in new development
Not worth it
Have the long technical discussions, and settle on a set for the team to adopt.
Code reviews have been shown to provide significant benefits to code quality, even more so than traditional testing. I would suggest getting in the habit of performing routine design and code reviews; the number of stages at which reviews are performed, the formality and detail of the reviews, and the percentage of work subject to review can all be set according to your business requirements. Coding standards can be useful when done right (and if everyone's code looks similar, it is also easier to review), but where you put your braces and how far you indent blocks isn't really going to affect defect rates.
Also, it's worth familiarizing yourself and your peers with the concept of technical debt and working bit by bit to redesign and improve parts of the system as you come in contact with them. However, unless you have comprehensive unit testing and/or processes in place to ensure high code quality, this may not help things.
Given that this is Stack Overflow, someone should reference The Joel Test. I like to automate as much as possible, so using Lint is also a must.
These basics are good for most any industry or team size:
Use Agile methodology (scrum is a good example).
http://www3.software.ibm.com/ibmdl/pub/software/rational/web/whitepapers/2003/rup_bestpractices.pdf
Use Test-driven development. http://www.agiledata.org/essays/tdd.html
Use consistent coding standards. Here is an example document:
http://www.dotnetspider.com/tutorials/BestPractices.aspx
Get your team familiar with good design patterns.
http://www.dofactory.com/Patterns/Patterns.aspx
You can't go wrong with these basics. Build from there with new team members who have been there and done that. I'd strongly suggest pair programming once you've got those guys on the team. It is the best way to infect people with best practices.
Best of luck to you!
The first thing you need to consider when adding coding standards/best practices is the effect it will have on your team's morale and cohesiveness. Developers usually resent any practices that are imposed on them even if they are good ideas. The people issues have to be addressed for a big change to be successful.
You will need to involve your group in developing the standards and try to achieve consensus. That said, you will never get universal agreement on anything, so you will have to balance consensus and getting to standards. I've seen major fights over something as simple as tabs versus spaces in source.
The best book I've seen for C/C++ guidelines in complicated projects is Large-Scale C++ Software Design. That book, along with Code Complete (which is a must-read classic), is a good starting point.
You don't mention any language, and while it is true that most coding standards are language-independent, knowing it would also help in your search. At most of the companies I have worked for, there were different coding standards for different programming languages. So my advice would be:
Choose your language.
Search the web, since there are plenty of standards out there for your language.
Gather all the standards you find.
Divide your team into groups and give them a few of the documents to analyze. They should come up with a list of things they think worthy of having in their new standards.
Have a meeting where each group presents its findings to everybody (there will be a lot of redundancy between groups). It should be an open discussion, and everybody's opinion should be accounted for.
Compile a list of the standards that were selected by the majority of the coders; that should be your starting point.
Perform semi-annual reviews of the standards, to add or remove things.
Now, the logic behind this is: most of the problems with introducing a coding standard from scratch come down to developer acceptance. Each of us has a way of doing things, and it sucks when somebody from the outside believes one way of doing things is better than another. So if developers understand the logic and purpose of the coding standards, you have half of the work done. The other thing is that standards should be designed and created specifically for your company's needs. There will be some things that make sense and some that don't; with the above approach you can discriminate between those. Standards should also be able to change over time to reflect the company's needs, so a coding standard should be a living document.
This blog post describes a lot of the common practices of mediocre programming. These are some of the potential issues your team is having. It includes a quick explanation of the "best practice" for each one.
One thing you should have rules about is some kind of naming standard. It just makes life easier for people while not being really invasive.
Other than that, I'd have to say it depends on the level of your team. Some need more rules than others. The better people are, the less "support" they need from rules.
If you want a complete set of coding rules to control every little detail, you're going to spend lots of time arguing about rules and exceptions to rules and what you should write rules about. I'd go with something already written instead.
If you are concerned about quality then one thing you could do that really isn't about rules, is:
Automated building and testing. This has helped me a lot. Once you find a problem, it really helps to have an environment where you can write a test to verify the problem. Fix the problem and then easily add your test to an automatic test suite that makes sure that sort of problem can't come back without being spotted.
Then make sure these run often. Preferably every time someone checks something in.
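As a sketch of the "run on every check-in" part (Python; the test directory and the wiring are assumptions, since this could equally live in the CI server): a check-in gate can be as small as a script that refuses the commit when the suite fails.

```python
#!/usr/bin/env python3
# Minimal sketch of a check-in gate: run the automated test suite and
# block the commit if anything fails. The "tests" directory is an
# illustrative assumption; wire this up as a VCS pre-commit hook or
# let the CI server run the same command centrally.

import subprocess
import sys

result = subprocess.run([sys.executable, "-m", "unittest", "discover", "tests"])
if result.returncode != 0:
    print("tests failed; fix them before checking in")
    sys.exit(1)
```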
If your framework requires certain rules to function well, put those in your coding standard.
If you decide to have coding standards, you want to be very careful about what you put in. If the document is too long or focuses on arbitrary stylistic details, it will just get ignored and nobody will bother to read it. Often a lot of what goes into coding standards is just the preferences of the person who wrote the document (or some standards that have been copied off the web!). If something is in the standard, it needs to be very clear to the reader how it improves quality and why it is important.
I would argue that a large proportion of what makes code readable is to do with design rather than the layout of the code. I have seen a lot of code that would adhere to the standards but still be difficult to read (really long methods, bad naming, etc.). You can't have everything in the standards; at some point it comes down to how skilled and disciplined your developers are, so do what you can to increase their skills.
Perhaps rather than a coding standards document, try to get the team to learn about good design (easier said than done, I know). Make them aware of things like the SOLID principles, how to separate concerns, and how to handle exceptions properly. If they design well, the code will be easy to read, and it won't matter whether there are enough blank lines or the curly braces are in the right place.
Get some books about design principles (see a couple of recommendations below). Maybe get the team to do some workshops to discuss some of the topics. Perhaps get them to collectively write a document on what principles might be important for their project. Whatever you do, make sure it is the team as a whole who decides what the standards/principles are.
http://www.amazon.co.uk/Principles-Patterns-Practices-Robert-Martin/dp/0131857258/
http://www.amazon.co.uk/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882
Don't write your own standards from scratch.
Chances are there are several out there that define what you want already, and they are more complete than what you could come up with on your own. That said, don't worry too much if you don't agree 100% with one on minor matters; you can swap in parts of others, or call some infraction of it a warning rather than an error, depending on your own needs. (For example, some standards would throw a warning if a line is more than 80 characters long; I prefer no more than 120 as a hard limit, but would make sure there was a good reason, readability and clarity for example, if a line went over 80.)
Also, do try to find automated methods of checking your code against the standard, including your own minor changes as required.
Besides books already recommended, I would also mention,
C++ Coding Standards: 101 Rules, Guidelines, and Best Practices by Herb Sutter and Andrei Alexandrescu (Paperback - Nov 4, 2004)
If you're programming in VB.NET, make sure Option Explicit and Option Strict are set to On. This will save you a lot of grief tracking down mysterious bugs. These can be set at the project level, so you never have to remember to set them in your code files.
I really like:
the MISRA C standard (it's a little strict, though, but the ideas hold for C++),
and High Integrity's C++ standard, http://www.codingstandard.com/HICPPCM/index.html, which borrows heavily from MISRA.
LDRA (a static analysis tool) uses these standards to grade your work (I don't use it, as it's expensive), but I can vouch for running cppcheck as a good free/libre static analysis checker; see the example run below.
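A typical cppcheck invocation, for reference (these are standard cppcheck flags; point it at your own source tree):

```
cppcheck --enable=all --inconclusive src/
```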

How do you test the usability of your user interfaces [closed]

How do you test the usability of the user interfaces of your applications - be they web or desktop? Do you just throw it all together and then tweak it based on user experience once the application is live? Or do you pass it to a specific usability team for testing prior to release?
We are a small software house, but I am interested in the best practices of how to measure usability.
Any help appreciated.
I like Paul Buchheit's answer on this from Startup School. The short version of what he said: listen to your users. Listening does not mean obeying your users. Take in the data, filter out all the bad advice, and iteratively clean up the site. Lather, rinse, repeat.
If you are a small shop you probably don't have a team of QA or Usability people or whatever to go through the site. Your users are going to be the ones that actually use the site though. Their feedback can be invaluable.
If something is too hard for one of your users to use, or it's too complex for them to understand why they should use it, then it might be the same for 1,000 other users. Find a simpler way of accomplishing the same thing.
Once you have gathered all of this feedback and have a list of things to do, do the simplest ones first. That way you have forward moving usability progress.
What I like to do is give someone an install package, ask them to perform a number of tasks related to how the application works, and watch.
Hardest part is to keep your mouth shut.
Some of the best advice on usability testing is available on Jakob Nielsen's Website http://www.useit.com. He advocates what Will mentioned - ask users to perform various tasks on your website or web application and then sit back to see what they do.
Do not interrupt the users by asking questions or guiding them. Just observe them and document their flow. You can also get hardware and software to do eye-tracking and understand what captures the attention of the users.
However, usability should not start from the testing phase. You must have some general idea of what users generally like and do not like when you do development. There are many websites and books outlining generally accepted usability standards and principles.
Normally, we test the usability of new interfaces by asking a small selection of users to try out a beta version.
We give a small amount of instruction as to what the new features/screens are supposed to do and let them dive straight into it. It's very interesting to see where they are looking and clicking. We never demo the new features - we only talk about what it does.
If the UI changes are minimal then they go live and we gather feedback from real users. It's only when we are making big changes that we go through usability tests on beta.
When developing new screens, it usually helps a hell of a lot to sit a colleague in front of the UI and ask them what it does. Which areas do they click on? Where do they look first? What sections draw their attention? Etc.
I agree with Adam; using a very computer illiterate person is very helpful. However, what I've run into before with that is the program I want them to try out just isn't "up their alley" as far as something they would ever want to do.
A good way to start is with a paper prototype. Have specific tasks that you want your "user" to perform and have them do it. For more on paper prototyping, start here.
I frequently take any new interface I'm working on to one of our technical support people. They've heard every complaint about interfaces that you could ever imagine, so if anyone is going to think up potential problems, they will.
Also, and I'm not kidding about this, I often take the least computer-literate person I know (your mother is often a good choice, but they have to have used a computer before, otherwise it's going to be pointless) and let them loose on the interface with no instruction. If they can't figure out where things are intuitively, then your GUI likely needs work. Remember, don't make them think! (Yes, I know this is for web design, but it applies.)
There are many ways to test the usability of a system; please check any available literature you can find. I just want to stress that usability testing is not as hard as you or anyone might think. In a famous paper called "A mathematical model of the finding of usability problems" (INTERACT '93 and CHI '93), J. Nielsen and T. K. Landauer showed that only five users are enough to find most problems in a small system.
If you have no way to read this paper, try this article in the author's website:
http://www.useit.com/alertbox/20000319.html
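The model behind that claim is simple enough to sketch: Nielsen and Landauer model the share of problems found by n users as 1 - (1 - L)^n, where L is the per-user problem-discovery rate (they report roughly 0.31 on average; treat that as an assumption about your own system). A quick check in Python:

```python
# Proportion of usability problems found with n test users, per the
# Nielsen-Landauer model. lam = 0.31 is their reported average rate;
# your system's rate may differ.
lam = 0.31
for n in range(1, 9):
    print(f"{n} users: {1 - (1 - lam) ** n:.0%} of problems found")
# five users find roughly 84% of the problems, which is where the
# "five users are enough" advice comes from
```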
It's been a while since this question was last active, but here goes anyway.
From experience:
Always use objectively measurable criteria to decide if usability is better or not (time to accomplish carefully selected tasks, inactive time, KLM-type metrics); here a key-mouse logger can be a precious ally.
Never go too far ahead before consulting and measuring again with your client (do not encage yourself with the paper prototype and emerge with the finished product; that just never works).
Read, read, read, try, evolve.
Keep things simple and always remember the task at hand (why the user needs the interface).
Test, test and test again...
Always get to the bottom of the user's requests. Although the checkbox the user requests at this particular place may be the best thing to do, it almost always hides a more fundamental flaw.
The system user (the one using it, as opposed to the one paying for it) is your best ally; keep him/her on your side.
Never be afraid of refactoring your design and evolving your system. Evolve your metrics and measurements too; however, be careful in doing so not to break measurement continuity, as it is the best token of objective progress in a VERY subjective world.
Recommended reading (other than previously proposed):
Handbook of Usability Testing by Jeff Rubin. A bit extreme, but we toyed with an agile version of his approach and found that if we spent 30 minutes a week with users we would get a LOT of useful feedback while not getting swamped with too much info.
Keep a close watch on the Shneidermans and Nielsens of this world, and others that may arise.
As usability inspection goes, there are several viable methods. They require different amounts of resources in terms of people, analysis, and equipment.
The most common, and easiest to perform, is called:
Heuristic evaluation
You basically walk through each screen to check if it conforms to the heuristics set by you or your customer.
Check this article by Nielsen.
Cognitive walkthrough
This method requires you to ask the user to complete steps in the application. You prepare steps for the user to complete. Issues that arise during this walkthrough are taken into consideration when finishing the application.
Check this paper for details.
Think-aloud analysis
I have used this method mostly in the early stages of prototyping. I let the user talk freely about the system while it is being used, and ask questions about use, design, etc. You can get a really nice view of the general feelings about the system, and of what features are lacking.
Check this paper for details.
Interaction analysis
This is a trickier one. I have only used the data-gathering techniques proposed by this method. It takes into account context, activities, body language, etc. Interaction analysis is commonly focused on research, not so much on commercial evaluations.
This link takes you to the article.
Keep in mind that these methods take practice to perfect. I would start with heuristic evaluation, then continue to cognitive walkthroughs and think-aloud analysis. Only use interaction analysis if you have lots of resources and time.
There are a number of methods to test or evaluate the usability of an application. They break down into qualitative and quantitative methods, and differ based on when you plan to test. They are further categorized by whether users are involved or experts do the testing.
To name a few methods:
Expert reviews - user interface or usability experts rate the usability of an interface based on agreed heuristics and principles.
Formative usability testing - task flows are taken and users are given tasks to complete. Qualitative feedback is collected on what users feel the pain points are during the testing. This form of testing is done during design to feed back into the design of the application.
Summative usability testing - task flows are taken and users are given tasks to complete. The application's performance in efficiency, effectiveness, and satisfaction is measured based on users' completion of the tasks.
The important difference is whether you engage a user or an expert to tell you the difference in usability, and, further, when you do the evaluation: at the end of the project or during the design phases.
I'm a strong believer in what I call 3-martini usability testing. When designing a system, imagine that the person who will be using it has just had 3 martinis.
Before handing over the system to colleagues (other programmers, quality assurance, tech support) or usability testers, an informal test with a couple of friends and a bottle of vodka (outside of work, of course) can often prove instructive.
