any way to wrap Given/Then in a namespace? - ruby

I'm new to Cucumber, and I'm having a hard time finding good documentation for Ruby with lots of examples.
#jobdataAPI
Feature: Submitting a toolbox talk request

  Scenario: When submitting with first aid kit with NO selected, open an HR ticket
    Given first aid kit NO
    Then open an HR ticket

  Scenario: When submitting with first aid kit fully stocked NO, open an HR ticket
    Given first aid kit fully stocked NO
    Then open an HR ticket

  Scenario: When submitting fire extinguisher NO, open an HR ticket
    Given fire extinguisher NO
    Then open an HR ticket
I have a scenario where I am going to have multiple "Then" steps with the same name, but I need to wire up step definitions that are individual to each Scenario.
Is it possible to write my step definitions inside a namespace?
This fails, because Scenario does not exist as a method:
# frozen_string_literal: true

World JOBDATA

Scenario('When submitting with first aid kit with NO selected, open an HR ticket') do
  Before do
  end

  After do
  end

  Given('first aid kit NO') do
  end

  Then('open an HR ticket') do
  end
end

Namespacing is not supported by Cucumber. Quite the opposite: Cucumber has the concept of a World. This World represents your business domain. Cucumber (quite rightly) suggests that in this World each unique thing should have a unique name. Without this, how can your business know what they and you are talking about?
So you can add things to World, but when you do you have to be very disciplined with your naming to ensure you don't overwrite things.
So there is no place for namespacing in Cucumber and you have no need to namespace to do the simple cuking shown in your examples.
On another point, Cucumber is designed so that:
Givens set up state
Whens perform an action
Thens make assertions on the results of the action
Clearly, in your sample feature you are mis-using Then.
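As a rough illustration of that split (the step wording, domain objects and matcher below are invented for this sketch, not taken from your project), flat step definitions with state shared through instance variables are usually all that is needed - no namespace required, and one generic Then can serve every scenario because the differences live in the Givens:

# features/step_definitions/toolbox_talk_steps.rb
# A minimal sketch -- ToolboxTalk and HrTickets are hypothetical stand-ins
# for whatever the real domain objects are.

Given('the toolbox talk form has {string} answered NO') do |question|
  @talk = ToolboxTalk.new          # Given: set up state only
  @talk.answer(question, 'NO')
end

When('the toolbox talk is submitted') do
  @talk.submit                     # When: perform the single action under test
end

Then('an HR ticket is opened') do
  # Then: assert on the result of the action; the same assertion works for
  # every scenario, because the interesting differences are in the Givens
  expect(HrTickets.open_for(@talk)).not_to be_empty
end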
Having a feature that explores the effects of different setups on a particular action is a pretty standard thing when cuking. Your feature should have:
a preamble, explaining what you are doing and why you are doing it. This gives context to the language you use in your scenarios.
scenario descriptions that match the scenario implementations
scenarios that read well to a business user in their language
There are several anomalies in your current scenario which you should address before trying to do anything technically complex.
Your feature talks about submitting a toolbox talk request, but your scenarios talk about opening an HR ticket.
Your Givens don't make sense. What has a first aid kit got to do with any of this?
I'd recommend you go back to your feature and try to write it with clarity and simplicity. Think of it as a document being read by your business owner. Make it clear and obvious, and use the preamble to explain the context in which the feature operates. Once you have done that, the current problems you have with step definitions should just disappear. If they don't, feel free to come back with a revised feature and question ... good luck


How to document undefined behaviour in the Scrum/agile/TDD process [closed]

We're using a semi-agile process at the moment where we still write a design/specification document and update it during the development process.
When we're refining our requirements we often hit edge cases where we decide it's not important to handle it, so we don't write any code for that use case and we don't test it. In the design spec we explicitly state that this scenario is out of scope because the system isn't designed to be used in that way.
In a more fully-fledged agile process, the tests are supposed to act as a specification for the expected behaviour of the system, but how would you record the fact that a certain scenario is explicitly out-of-scope rather than just getting accidentally missed out?
As a bit of clarification, here's the situation I'm trying to avoid: We have discussed a scenario and decided we won't handle it because it doesn't make sense. Then later on, when someone is trying to write the user guide, or give a training session, or a customer calls the help desk, exactly the same scenario comes up, so they ask me how the system handles it, and I think "I remember talking about this a year ago, but there are no tests for it. Maybe it got missed off the plan, or maybe we decided it wasn't a sensible use-case, or maybe there's a subtle reason why you can't actually ever get into that situation", so I have to try and search old Skype chats or emails to find out the answer. What I want to achieve is to make sure we have a record of why we decided not to support that scenario so that I can refer back to it in the future. At the moment I put this in the spec where everyone can see it.
I would document deliberately unsupported use cases/stories/requirements/features in your test files, which are much more likely to be regularly consulted, updated, etc. than specifications would be. I'd document each unsupported feature in the highest-level test file in which it was appropriate to discuss that feature. If it was an entire use case, I'd document it in an acceptance test (e.g. a Cucumber feature file or RSpec feature spec); if it was a detail I might document it in a unit test.
By "document" I mean that I'd write a test if I could. If not, I'd just comment. Which one would depend on the feature:
For features that a user might expect to be there, but for which there is no way for the user to access (e.g. a link or menu item that simply isn't present), I'd write a comment in the appropriate acceptance test file, next to the tests of the related features that do exist.
Side note: Some testing tools (e.g. Cucumber and RSpec) also allow you to have scenarios or examples in feature or spec files which aren't actually run, so you can use them like comments. I'd only do that if those disabled scenarios/examples didn't result in messages when you ran the tests that might make someone think that something was broken or unfinished. For example, RSpec's pending/skip loudly announces that there is work left to be done, so it would probably be annoying to use it for cases that were never meant to be implemented.
For situations that you decided not to handle, but which an inquisitive user might get themselves into anyway (e.g. entering an invalid value into a field or editing a URL to access a page for which they don't have permission), don't just ignore them, handle them in a clean if minimal way: quietly clear the invalid value, redirect the user to the home page, etc. Document this behavior in tests, perhaps with a comment explaining why you aren't doing anything even more helpful. It's not a lot of extra work, and it's a lot better than showing the user an error page or other alarming behavior.
For situations like the previous, but that you for some reason decided not to or couldn't find a way to handle at all, you can still write a test that documents the situation, for example that entering some invalid value into a form results in an HTTP 500.
If you would like to write a test, but for some reason you just can't, there are always comments -- again, in the appropriate test file near tests of related things that are implemented.
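To make that concrete, here is a minimal sketch in RSpec (the route, model and chosen behaviours are assumptions for illustration only) of tests that record decisions not to support something:

# spec/requests/widgets_spec.rb
# Sketch only -- the endpoint, params and outcomes are invented; the point
# is that the decision itself is recorded next to the related tests.
require "rails_helper"

RSpec.describe "Widget creation", type: :request do
  # Deliberately unsupported: pasted rich text is not validated or preserved,
  # it is quietly stripped to plain text (decision recorded in the spec doc).
  it "quietly strips markup from the description" do
    post "/widgets", params: { widget: { description: "<b>bold</b> text" } }

    expect(Widget.last.description).to eq("bold text")
  end

  # Known, accepted limitation: a malformed date is not handled at all and
  # currently results in a server error. We chose not to support this case.
  it "returns a server error for a malformed delivery date (accepted limitation)" do
    post "/widgets", params: { widget: { delivery_date: "not-a-date" } }

    expect(response).to have_http_status(:internal_server_error)
  end
end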
You should never test undefined behavior, by ...definition. The moment you test a behavior, you are defining it.
In practice, either it's valuable to handle an edge case or it isn't. If it is, then there should be a user story for it, which acts as documentation for that edge case. What you don't want to have is an old user story documenting a future behavior, so it's probably not advisable to document undefined behavior in stories that don't handle it.
More in general, agile development always works iteratively. Edge case discovery is part of iterative work: with work comes increased knowledge, with increased knowledge comes more work. It is important to capture these discoveries in new stories, instead of trying to handle everything in one go.
For example, suppose we're developing Stack Overflow and we're doing this story:
As a user I want to search questions so that I can find them
The team develops a simple question search and discovers that we need to handle closed questions... we hadn't thought of that! So we simply don't handle them (whatever the simplest to implement behavior is). Notice that the story doesn't document anything about closed questions in the results. We then add a new story
As a user I want to specifically search closed questions so that I can find more results
We develop this story, and find more edge cases, which are then more stories, etc.
In the design spec we explicitly state that this scenario is out of scope because the system isn't designed to be used in that way
Having undocumented functionality in your product really is a bad practice.
If your development team followed BDD/TDD techniques they should (note the emphasis on "should") reduce the likelihood of this happening. If you found this edge case, then what makes you think your customer won't? Having an untested and unexpected feature in your product could compromise the stability of your product.
I'd suggest that if an undocumented feature is found:
Find out how it was introduced (common reason: a developer thought it might be a good feature to have as it might be useful in the future and they didn't want to throw away work they produced!)
Discuss the feature with your Business Analysts and Product Owner. Find out if they want such a feature in your product. If they do, great: document and test it. If they don't, remove it, as it could be a liability.
You also had a question regarding the tracking of the outcome of these edge-case scenarios:
What I want to achieve is to make sure we have a record of why we decided not to support that scenario so that I can refer back to it in the future.
As you are writing a design/specification document, one approach you could take is to version that document. Then, when a feature/scenario is taken out you can note within a version change section in your document why the change was made. You can then refer to this change history at a later date.
However I'd recommend using a planning board to keep track of your user stories. Using such a board you could write a note on the card (virtual/physical) explaining why the feature was dropped which also could be referred to at a later date.

Stories and Scenarios that imply UI

I am trying to learn how to use BDD for our development process and I sometimes end up writing things that imply a UI design, so for brand new development or new features, the UI does not always exist.
For example, if I say this in a scenario "When a column header is clicked" it implies that this feature is based on some sort of table or grid, but at this point we are still just writing user-stories so there is no UI yet.
That gets me confused: at what point in the process do we come up with a UI design?
Keep in mind, I have only read articles about BDD; I think it would help our team a lot, but I'm still very new at this! Thx!
If you write your scenarios with a focus on the capabilities of the system, you'll be able to refactor the underlying steps within those scenarios more easily. It keeps them flexible. So I'd ask - what does clicking the column get for you? Are you selecting something? What are you going to do with the selection? Are you searching for something and sorting by a value?
I like to see scenarios which say things like:
When I look for the entry
When I go to the diary for January
When I look at the newest entries
When I look at the same T-shirt in black
These could all involve clicking on a column header, but the implementation detail doesn't matter. It's the capability of the system.
Beneath these high-level scenarios and steps I like to create a screen or page with the smaller steps like clicking buttons in it. This makes it easy to refactor.
I wrote this in a DSL rather than English, but it works with the same idea - you can't tell from the steps whether it's a GUI or a web page, and some of the steps involve multiple UI actions:
http://code.google.com/p/wipflash/source/browse/Example.PetShop.Scenarios/PetRegistrationAndPurchase.cs
Hope you find it interesting and maybe it helps. Good luck!
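In the same spirit, a rough Ruby sketch (the class, step wording and Capybara calls here are invented, just one plausible way to wire it up) of a capability-level step that delegates the clicking to a page object:

# features/support/diary_page.rb
# Hypothetical page object: the smaller UI steps (clicking links, buttons)
# live here, so the scenario never has to mention them.
class DiaryPage
  include Capybara::DSL

  def open_month(month_name)
    click_link "Diary"
    click_link month_name
  end
end

# features/step_definitions/diary_steps.rb
When('I go to the diary for {word}') do |month|
  DiaryPage.new.open_month(month)
end

If the diary later becomes a grid with clickable column headers, only the page object changes; the scenario keeps describing the capability.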
I guess you can write around that by saying "when I sort the information by X, then..." But then you would have to adjust your scenario to remove any mention of the data being displayed in a grid format, which could lead to some rather obtuse writing.
I think it's a good idea to start with UI design as soon as you possibly can. In the case you mentioned above, I think it would be perfectly valid to augment the user story with sketch of the relevant UI as you would imagine it, and then refine it as you go along. A pencil sketch on a piece of paper should be fine. Or you could use a tablet and SketchBook Pro if you want something all digital.
My point is that I don't see a real reason for the UI design to be left out of user stories. You probably already know that you're going to build a Windows, WPF, or Web application. And it's safe to assume that when you want to display tabular data, you'll be using a grid. Keeping these assumptions out of the requirements obfuscates them without adding any real value.
User stories benefit from the fact that you describe concrete interactions, and once you know the concrete data and behaviour of the system, you might as well add more information about the way you interact. This allows you to use tools like Cucumber, which together with Selenium enables you to translate a story into a test. You might go even further and, e.g. for web apps, capture all the pages at which a concrete story starts and collect all interactions with each page, resulting in a sort of information architecture you might use for documentation or prototyping and later UI testing.
On the other hand, this makes your stories somewhat brittle when it comes to UI changes. I think the agile way of thinking about this is same as when it comes to design changes - do not design for the future, do the simplest possible thing, in the future you might need to change it anyway.
If you stripped your user stories of all concrete things (even inputs), you would end up with use cases (at least in their simplest format; it depends on how you write your stories). Use cases are in this respect not brittle at all; they specify only goals. This makes them resistant to change, but it's harder to transfer information automatically using tools.
As for the process, RUP/UP derives UI from use cases, but I think agile is by its nature incremental (I will not say iterative; that would exclude agile methods like FDD and Kanban). This means that as you implement a new story, you add to your UI what is necessary. This only makes adding UI specifics in stories more reasonable. The problem is that this is not a very good way to create UI or, more generally, UX (user experience). This is exactly what one might call a weak point of agile. The Agile manifesto concentrates on functional software, but that is it. There are, as far as I know, no agile techniques for designing UI or UX.
I think you just need to step back a bit.
BAD: When I click the column header, the rows get sorted by the column I clicked.
GOOD: Then I sort the rows by name, or sometimes by ZIP code if the name is very common, like "Smith".
A user story / workflow is a sequence of what the user wants to achieve, not a sequence of actions how he achieves that. You are collecting the What's so you can determine the best How's for all users and use cases.
Looking at a singular aspect of your post:
if I say this in a scenario "When a column header is clicked" it implies that this feature is based on some sort of table or grid, but at this point we are still just writing user-stories so there is no UI yet.
If this came from a user, not from you, it would show a hidden expectation that there actually is a table or grid with column headers. Even coming from you it's not entirely without value, as you might be a user, too. It might be short-sighted, thinking of a grid just because it comes from an SQL query, or it might be spot-on because it's the presentation you expect the data in. A creative UI isn't a bad thing as such, but ignoring user expectations is.

Generating UI from DB - the good, the bad and the ugly?

I've read a statement somewhere that generating UI automatically from DB layout (or business objects, or whatever other business layer) is a bad idea. I can also imagine a few good challenges that one would have to face in order to make something like this work.
However I have not seen (nor could I find) any examples of people attempting it. Thus I'm wondering - is it really that bad? It's definitely not easy, but can it be done with any measure of success? What are the major obstacles? It would be great to see some examples of successes and failures.
To clarify - with "generating UI automatically" I mean that the all forms with all their controls are generated completely automatically (at runtime or compile time), based perhaps on some hints in metadata on how the data should be represented. This is in contrast to designing forms by hand (as most people do).
Added: Found this somewhat related question
Added 2: OK, it seems that one way this can get pretty fair results is if enough presentation-related metadata is available. For this approach, how much would be "enough", and would it be any less work than designing the form manually? Does it also provide greater flexibility for future changes?
We had a project which would generate the database tables/stored proc as well as the UI from business classes. It was done in .NET and we used a lot of Custom Attributes on the classes and properties to make it behave how we wanted it to. It worked great though and if you manage to follow your design you can create customizations of your software really easily. We also did have a way of putting in "custom" user controls for some very exceptional cases.
All in all it worked out well for us. Unfortunately it is a sold banking product and there is no available source.
it's ok for something tiny where all you need is a utilitarian method to get the data in.
for anything resembling a real application though, it's a terrible idea. what makes for a good UI is the humanisation factor, the bits you tweak to ensure that this machine reacts well to a person's touch.
you just can't get that when your interface is generated mechanically.... well maybe with something approaching AI. :)
edit - to clarify: UI generated from code/db is fine as a starting point, it's just a rubbish end point.
Hey, this is not difficult to achieve at all, and it's not a bad idea at all. It all depends on your project needs. A lot of software products (mind you, not projects but products) depend upon this model - so they don't have to rewrite their code/UI logic for different client needs. Clients can customize their UI the way they want to using a designer form in the admin system.
I have used XML for preserving metadata for this sort of stuff. Some of the attributes which I saved for every field were:
friendlyname (label caption)
haspredefinedvalues (yes for drop down list / multi check box list)
multiselect (if yes then check box list, if no then drop down list)
datatype
maxlength
required
minvalue
maxvalue
regularexpression
enabled (to show or not to show)
sortkey (order on the web form)
Regarding positioning - I did not care much and simply generate table tr td tags one below the other - however, if you want to implement this as well, you can have one more attribute called CssClass where you can define UI-specific properties (look and feel, positioning, etc.).
UPDATE: also note that a lot of ecommerce products follow this kind of dynamic UI when you want to enter product information - as their clients can be selling everything under the sun from furniture to sex toys ;-) so instead of rewriting their code for every different industry they simply let their clients enter metadata for product attributes via an admin form :-)
I would also recommend you look at the Entity-attribute-value model - it has its own pros and cons but I feel it can be used quite well with your requirements.
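To make the approach above concrete, here is a minimal Ruby sketch (the field struct, values and HTML are invented for illustration; a real implementation would add escaping, validation, positioning and so on) of rendering a form control from that kind of saved metadata:

# Sketch of metadata-driven form rendering; FieldMeta mirrors a few of the
# attributes listed above, with hypothetical values.
FieldMeta = Struct.new(:name, :friendlyname, :haspredefinedvalues,
                       :multiselect, :values, :required, keyword_init: true)

def render_field(meta)
  label = "<label>#{meta.friendlyname}#{' *' if meta.required}</label>"
  input =
    if meta.haspredefinedvalues && meta.multiselect
      # multiple predefined values -> check box list
      meta.values.map { |v| "<input type='checkbox' name='#{meta.name}[]' value='#{v}'>#{v}" }.join
    elsif meta.haspredefinedvalues
      # single choice from predefined values -> drop down list
      "<select name='#{meta.name}'>" + meta.values.map { |v| "<option>#{v}</option>" }.join + "</select>"
    else
      "<input type='text' name='#{meta.name}'>"
    end
  label + input
end

# Usage: in practice the metadata would be loaded from the XML/DB, not hard-coded.
colour = FieldMeta.new(name: "colour", friendlyname: "Colour", required: true,
                       haspredefinedvalues: true, multiselect: false,
                       values: %w[Red Green Blue])
puts render_field(colour)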
In my opinion there are some things you should think about:
Does the customer need a function to customize his UI?
Are there a lot of different attributes or elements?
Is the effort of creating such a "rendering engine" worth it?
Okay, I think it's pretty obvious why you should think about these. It really depends on your project whether that kind of model makes sense...
If you want to create a lot of forms that can be customized at runtime, then this model could be pretty useful. Also, if you need to build a lot of smaller tools and you use this as some kind of "engine", then the effort could be worth it because you can save a lot of time.
With that kind of "rendering engine" you could automatically add error reportings, check the values or add other things that are always build up with the same pattern. But if you have too many of this things, elements or attributes then the performance can go down rapidly.
Another things that becomes interesting in bigger projects is, that changes that have to occur in each form just have to be made in the engine, not in each form. This could save A LOT of time if there is a bug in the finished application.
In our company we use a similar model for an interface generator between cash software (right now I can't remember the right word for it...) and our application, except that it doesn't create a UI, but an output file for one of the applications.
We use XML to define the structure and how the values need to be converted, and so on.
I would say that in most cases the data is not suitable for UI generation. That's why you almost always put a layer of logic in between to interpret the DB information for the user. Another thing is that when you generate the UI from the DB you will end up displaying the inner workings of the system, something that you normally don't want to do.
But it depends on where the DB came from. If it was created to exactly reflect what the users' goals for the system are - if the users' mental model of what the application should help them with is stored in the DB - then it might just work. But then you have to start at the users' end. If not, I suggest you don't go that way.
Can you look at your problem from an application architecture perspective? I see you as another database terrorist - trying to solve everything by writing stored procedures. Why have a UI at all? Try doing it in a DB script. What kind of composite system will you end up with as a result of such an approach? When a system serves different businesses - try modularization, selectively discovered components, restricted sharing of references. The UI should be replaceable and independent from the business layer. When you store so much in the DB there is a hard dependency on it from the UI - the system becomes a monolith. How do you implement the MVVM pattern in a scenario where the UI is generated? Designers like Blend contain lots of features which cannot be replaced by the most futuristic UI generator - unless your development platform is Notepad only.
There is a hybrid approach where forms and all are described in a database to ensure consistency server side, which is then compiled to ensure efficiency client side on deploy.
A real-life example is the enterprise software MS Dynamics AX.
It has a 'Data' database and a 'Model' database.
The 'Model' stores forms, classes, jobs and every artefact the application needs to run.
Deploying a new software structure used to mean dumping the model database and initiating a CIL compile (CIL for Common Intermediate Language, something used by Microsoft in .NET).
This way is suitable for enterprise-wide software and can handle large customizations. But keep in mind that this approach sets a framework that should be well understood by whoever is going to maintain and customize the application later.
I did this (in PHP / MySQL) to automatically generate sections of a CMS that I was building for a client. It worked OK; my main problem was that the code that generates the forms became very opaque and difficult to understand, and therefore difficult to reuse and modify, so I did not reuse it.
Note that the tables followed strict conventions such as naming, etc. which made it possible for the UI to expect particular columns and infer information about the naming of the columns and tables. There is a need for meta information to help the UI display the data.
Generally it can work; however, if your UI just mirrors the database then there is probably lots of room to improve. A good UI should do much more than mirror a database: it should be built around human interaction patterns and preferences, not around the database structure.
So basically if you want to be cheap and do a quick-and-dirty interface which mirrors your DB then go for it. The main challenge would be to find good quality code that can do this or write it yourself.
From my perspective, it was always a problem to change edit forms when a very simple change was needed in a table structure.
I always had the feeling we have to spend too much time on rewriting the CRUD forms instead of developing the useful stuff, like processing / reporting / analyzing data, giving alerts for decisions etc...
For this reason, I made a code generator a long time ago. It became easier to re-generate the forms, with one simple restriction: keep the CSS class names. Simple as that!
The UI was always based on very "standard" code, controlled by a custom CSS.
Whenever I needed to change the database structure, and so update an edit form, I had to re-generate the code and redeploy.
One disadvantage I noticed was that changes (customizations, improvements, etc.) made to the previously generated code are lost when you re-generate it.
But anyway, the advantage of having a lot of work done by the code-generator was great!
I initially did it for the 2000s Microsoft ASP (Active Server Pages) & Microsoft SQL Server... so, when that technology was replaced by .NET, my code generator became obsolete.
I made something similar for PHP but I never finished it...
Anyway, from small experiments I found that generating code ON THE FLY can be way more helpful (and this approach does not exclude the SAVED generated code): no worries about changing database etc.
So, the next step was to create something that I am very proud to show here, and I think it is a nice resolution for the issue raised in this thread.
I would start with the applicable use cases: https://data-seed.tech/usecases.php.
I worked to add details on how to use it, but if something is still missing please let me know here!
You can change the database structure and, without a line of code, start editing data; on top of this, you have an API available for CRUD operations.
I am still a fan of the "code-generator" approach, and I think it is just a flavor of using XML/XSLT that I used for DATA-SEED. I plan to add code-generator functionalities.

Best Practices & Principles for GUI design [closed]

What is your best practical user-friendly user-interface design or principle?
Please submit those practices that you find actually make things really useful - no matter what - if it works for your users, share it!
Summary/Collation
Principles
KISS.
Be clear and specific in what an option will achieve: for example, use verbs that indicate the action that will follow on a choice (see: Impl. 1).
Use obvious default actions appropriate to what the user needs/wants to achieve.
Fit the appearance and behavior of the UI to the environment/process/audience: stand-alone application, web-page, portable, scientific analysis, flash-game, professionals/children, ...
Reduce the learning curve of a new user.
Rather than disabling or hiding options, consider giving a helpful message where the user can have alternatives, but only where those alternatives exist. If no alternatives are available, it's better to disable the option - which visually states that the option is not available - and do not hide the unavailable options; rather, explain in a mouse-over popup why an option is disabled.
Stay consistent and conform to practices, and placement of controls, as is implemented in widely-used successful applications.
Lead the expectations of the user and let your program behave according to those expectations.
Stick to the vocabulary and knowledge of the user and do not use programmer/implementation terminology.
Follow basic design principles: contrast (obviousness), repetition (consistency), alignment (appearance), and proximity (grouping).
Implementation
(See answer by paiNie) "Try to use verbs in your dialog boxes."
Allow/implement undo and redo.
References
Windows Vista User Experience Guidelines [http://msdn.microsoft.com/en-us/library/aa511258.aspx]
Dutch websites - "Drempelvrij" guidelines [http://www.drempelvrij.nl/richtlijnen]
Web Content Accessibility Guidelines (WCAG 1.0) [http://www.w3.org/TR/WCAG10/]
Consistence [http://www.amazon.com/Design-Everyday-Things-Donald-Norman/dp/0385267746]
Don't make me Think [http://www.amazon.com/Dont-Make-Me-Think-Usability/dp/0321344758/ref=pdbbssr_1?ie=UTF8&s=books&qid=1221726383&sr=8-1]
Be powerful and simple [http://msdn.microsoft.com/en-us/library/aa511332.aspx]
Gestalt design laws [http://www.squidoo.com/gestaltlaws]
I test my GUI against my grandma.
Try to use verbs in your dialog boxes.
It means using buttons labelled with the action being confirmed rather than generic ones (the original answer illustrated this with two dialog screenshots, one of each style).
Follow basic design principles
Contrast - Make things that are different look different
Repetition - Repeat the same style in a screen and for other screens
Alignment - Line screen elements up! Yes, that includes text, images, controls and labels.
Proximity - Group related elements together. A set of input fields to enter an address should be grouped together and be distinct from the group of input fields to enter credit card info. This is basic Gestalt Design Laws.
Never ask "Are you sure?". Just allow unlimited, reliable undo/redo.
Try to think about what your user wants to achieve instead of what the requirements are.
The user will enter your system and use it to achieve a goal. When you open up calc you need to make a simple fast calculation 90% of the time so that's why by default it is set to simple mode.
So don't think about what the application must do but think about the user which will be doing it, probably bored, and try to design based on what his intentions are, try to make his life easier.
If you're doing anything for the web, or any front-facing software application for that matter, you really owe it to yourself to read...
Don't make me think - Steve Krug
Breadcrumbs in webapps:
Tell -> The -> User -> Where -> She -> Is in the system
This is pretty hard to do in "dynamic" systems with multiple paths to the same data, but it often helps navigate the system.
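One simple way to handle the multiple-paths case is to record the path the user actually took (e.g. in the session) rather than deriving it from the data; a tiny sketch, with an invented data structure and section titles:

# Minimal breadcrumb sketch: keep the trail the user actually followed
# and render it, newest section last.
Crumb = Struct.new(:title, :path)

def breadcrumb_html(trail)
  trail.map { |c| "<a href='#{c.path}'>#{c.title}</a>" }.join(" &gt; ")
end

trail = [
  Crumb.new("Home", "/"),
  Crumb.new("Projects", "/projects"),
  Crumb.new("Apollo", "/projects/apollo"),
]
puts breadcrumb_html(trail)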
I try to adapt to the environment.
When developing a Windows application, I use the Windows Vista User Experience Guidelines, but when I'm developing a web application I use the appropriate guidelines; because I develop Dutch websites, I use the "Drempelvrij" guidelines, which are based on the Web Content Accessibility Guidelines (WCAG 1.0) by the World Wide Web Consortium (W3C).
The reason I do this is to reduce the learning curve of a new user.
I would recommend getting a good solid understanding of GUI design by reading the book The Design of Everyday Things. The main principle, though, is a comment from Joel Spolsky: when the behavior of the application differs from what the user expects to happen, you have a problem with your graphical user interface.
The best example is when somebody swaps around the OK and Cancel buttons on some web sites. The user expects the OK button to be on the left and the Cancel button to be on the right. So, in short, when the application behavior differs from what the user expects to happen, you have a user interface design problem.
That said, the best advice, no matter what design or design pattern you follow, is to keep the design and conventions consistent throughout the application.
Avoid asking the user to make choices whenever you can (i.e. don't create a fork with a configuration dialog!)
For every option and every message box, ask yourself: can I instead come up with some reasonable default behavior that
makes sense?
does not get in the user's way?
is easy enough to learn that it costs little to the user that I impose this on him?
I can use my Palm handheld as an example: the settings are really minimalistic, and I'm quite happy with that. The basic applications are well designed enough that I can simply use them without feeling the need for tweaking. Ok, there are some things I can't do, and in fact I sort of had to adapt myself to the tool (instead of the opposite), but in the end this really makes my life easier.
This website is another example: you can't configure anything, and yet I find it really nice to use.
Reasonable defaults can be hard to figure out, and simple usability tests can provide a lot of clues to help you with that.
Show the interface to a sample of users. Ask them to perform a typical task. Watch for their mistakes. Listen to their comments. Make changes and repeat.
The Design of Everyday Things - Donald Norman
A canon of design lore and the basis of many HCI courses at universities around the world. You won't design a great GUI in five minutes with a few comments from a web forum, but some principles will get your thinking pointed the right way.
When constructing error messages, make the error message answer these 3 questions (in that order):
What happened?
Why did it happen?
What can be done about it?
This is from "Human Interface Guidelines: The Apple Desktop Interface" (1987, ISBN 0-201-17753-6), but it can be used for any error message anywhere.
There is an updated version for Mac OS X.
The Microsoft page User Interface Messages says the same thing: "... in the case of an error message, you should include the issue, the cause, and the user action to correct the problem."
Also include any information that is known by the program, not just some fixed string. E.g. for the "Why did it happen" part of the error message use "Raw spectrum file L:\refDataForMascotParser\TripleEncoding\Q1LCMS190203_01DoubleArg.wiff does not exist" instead of just "File does not exist".
Contrast this with the infamous error message: "An error happend.".
(Stolen from Joel :o) )
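A small sketch of that three-part structure in code (the exception class, wording and path are invented for illustration):

# Build an error message that answers what happened, why it happened,
# and what can be done about it -- using real values, not a fixed string.
class SpectrumFileMissingError < StandardError
  def initialize(path)
    what   = "The spectrum could not be loaded."                                          # what happened
    why    = "Raw spectrum file #{path} does not exist."                                  # why it happened
    action = "Check that the file has not been moved or renamed, then retry the import." # what can be done
    super([what, why, action].join("\n"))
  end
end

begin
  raise SpectrumFileMissingError, "L:/refDataForMascotParser/example.wiff"
rescue SpectrumFileMissingError => e
  puts e.message
end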
Don't disable/remove options - rather, give a helpful message when the user clicks/selects them.
As my data structures professor pointed out today: give instructions on how your program works to the average user. We programmers often think we're pretty logical with our programs, but the average user probably won't know what to do.
1. Use discreet/simple animated features to create seamless transitions from one section to the other. This helps the user to create a mental map of navigation/structure.
2. Use short (one word if possible) titles on the buttons that describe clearly the essence of the action.
3. Use semantic zooming where possible (a good example is how zooming works on Google/Bing maps, where more information is visible when you focus on an area).
4. Create at least two ways to navigate: vertical and horizontal. Vertical when you navigate between different sections and horizontal when you navigate within the contents of the section or subsection.
5. Always keep the main option nodes of your structure visible (where the size of the screen and the type of device allows it).
6. When you go deep into the structure, always keep a visible hint (such as in the form of a path) indicating where you are.
7. Hide elements when you want the user to focus on data (such as reading an article or viewing a project) - however, beware of points #4 and #5.
Be Powerful and Simple
Oh, and hire a designer / learn design skills. :)
With GUIs, standards are kind of platform-specific. E.g. while developing a GUI in Eclipse, this link provides a decent guideline.
I've read most of the above and one thing that I'm not seeing mentioned:
If users are meant to use the interface ONCE, showing only what they need to use if possible is great.
If the user interface is going to be used repeatedly by the same user, but maybe not very often, disabling controls is better than hiding them: the user interface changing and hidden features not being obvious (or remembered) by an occasional user is frustrating to the user.
If the user interface is going to be used VERY REGULARLY by the same user (and there is not a lot of turnover in the job, i.e. not a lot of new users coming online all the time), disabling controls is absolutely helpful: the user will become accustomed to the reasons why things happen, and preventing them from accidentally using controls in improper contexts is appreciated and prevents errors.
Just my opinion, but it all goes back to understanding your user profile, not just what a single user session might entail.

Documents for a project? [closed]

I work for a CMMI level 5 certified company and one thing I hate about it is the amount of documents we prepare (as a programmer I already hate documents). We have lots and lots of documents like the PID (project initiation doc), business requirements, system requirements, tech spec, code review checklist, issue logs, defect logs, configuration management plan, configuration management checklist(s), release documents and lots more...
Almost 90% of these docs are just done for the sake of the QA audit :) ... What do you think are the most important documents for a project? What documents can be used in the long run by another developer?
Please share your good practices here. I would like to use them for my own projects or the company I am planning to start in the long run.
Thanks
The key document is a good functional spec. There should be one and only one reference document for a system.
Overdoing documentation proliferates a large number of small requirements and spec documents every time someone changes a system or interface. For a system of any complexity, before long you have your spec distributed around several hundred assorted word, excel, visio and even powerpoint files. When this happens you lose clarity about what is current or even whether you have located and identified all pertinent documentation.
The BRD-SRD-Tech spec progression is based on an assumption that the business signs off the BRD, a business analyst signs off the SRD against requirements documented in the BRD and the technical specification is signed off against the SRD. This generates a web of sign-offs, multiple documents with redundant information and makes it difficult and clumsy to keep the spec documents up to date.
Because of this, subsequent requirements documentatation tends to take the form of a series of change request and supplemental requirement and spec docs, each with their own sign-off and audit process. You gain CYA and audit trail (or at least the appearance of an audit trail), but you lose clarity. There is now no definitive reference document for the system and it is difficult to establish what is current or relevant to any particular activity. The net result is that your business analysis process gets bogged down in forensic research, which adds overheads and latency to delivery schedules.
A spec document should be built in such a way that there is one definitive reference for any given system or subsystem. The document should be kept up to date and versioned. Get a good technical documentation tool like FrameMaker, so your process can scale and the document has some structural integrity of the sort lacking in Word.
For me the only real document I ever use is a spec. The more detail the better. However, it doesn't need to be all completed at one time, and it doesn't need to be particularly formal. What is far more useful to me than documents that are checked and signed and double-checked and double-signed is always being able to get the latest version of a document, and being able to talk to people about what they have written and get a decision in the case of any ambiguity. This is far more useful to me than anything else.
To sum up: a spec is the only document I have ever found useful, however it pales in comparison to having a project manager who knows the proposed system inside out, and can make sensible decisions based on what they know.
Documentation is like tofu -- most people hate it until they realize that under the right conditions, it can be really good.
The problem is that what you consider documentation is mostly made for documentation's sake. You, as a developer, don't see any immediate value in the documents you produce because you know you can do your job without all the TPS reports which you're required to make.
Unfortunately, I'm going to wager that there's not a lot you can do about it in a company where you're being forced to eat raw tofu all the time. You'll probably just have to suck it up and write the docs which your company requires, but you can at least do one thing... you can write documents which are at least useful to you, and you can pass them along with your code for others who will maintain it.
Aside from inline documentation, you could set up a wiki to be used by yourself and people on your team. This type of documentation is searchable, which is already a big plus to developers, plus it's more of a living document instead of a homework-like paper you had to write. You already post to SO, so just think of your documentation as pooling your knowledge in a more useful place.
What do you think are the most important documents for a project?
Different people have different needs: for example the documents which the owner needs (e.g. the business contract) aren't the same as the documents which QA needs.
What documents can be used in the long run by another developer?
IMO the most important document (except for the source code) is the functional specification: because what the software is supposed to do (as opposed to, what it is doing) is the one thing that can't necessarily be reverse-engineered. See also How does a good developer keep from creating code with a low bus hit factor?
User Stories, burndown chart, code
I'm a fan of the old 4+1 views:
Use Case view (a/k/a user stories). There are several forms: proper use cases, forward-looking use cases that aren't as well defined and epics which need to be decomposed.
Logical view. The "static" view. UML Class diagrams and the like work well here as a design document. This also includes request and response formats for various protocols. Here is where we document the RESTful requests and responses. This includes the REST URI design.
Process view. The "dynamic" view. UML activity diagrams, sequence diagrams and statecharts and the like for here for design documents. In some cases, simple narratives work well. In other cases, there's a State design pattern, and it requires a combination of class diagrams and statecharts to show how the stateful objects interact.
This also includes protocols (e.g. REST). Here is where we define any special processing for the various REST requests.
This also includes an authentication or authorization rules, and any other cross-cutting aspects like security, logging, etc.
Component view. The pieces we're building for deployment. This includes the stuff we depend on, the structure of the modules and packages, etc. This is often a simple component diagram or a list of components and their dependencies.
Deployment view. We try to generate this from the code as deployed. Since we're using Python, we use epydoc to create the API documentation. We also use Sphinx to import module documentation into this view of the software.
This also includes the parameters, settings, and configuration details.
This, however, isn't sufficient.
When projects start, you have to work up to this through a series of sprints.
The first sprints build just the use case view.
Subsequent sprints build an "architecture" to implement the use cases. The architecture document has 4+1 views, but at a high level of abstraction. It summarizes the structure of the model schemas, the requests and replies, the RESTful processing, other processing, the expected componentry, etc. It never has a Deployment view. We generally reference operator guide and API documents as the deployment view of an architecture.
Then design-and-construction sprints build (and update) detailed 4+1 view documents for various components.
Then release sprints build (and update) the deployment views.
From the project point of view, the most important documents are those that normally include the word Plan, such as the Project Plan, Configuration Management Plan, Quality Plan, etc.
What you are describing is common in process improvements, and normally comes down to two major causes. One is that the system really is overreaching and getting in the way of real work being done. Another is actually answered in your question: it is not that the documents are only done for the sake of audits, and your focus should not just be how useful the doc is for other developers, but how useful it is for the project or the company as a whole.
One usually looks at things from one's own perspective; sometimes it's necessary to look at the general picture.
