We're using a semi-agile process at the moment where we still write a design/specification document and update it during the development process.
When we're refining our requirements we often hit edge cases that we decide are not important to handle, so we don't write any code for them and we don't test them. In the design spec we explicitly state that the scenario is out of scope because the system isn't designed to be used in that way.
In a more fully-fledged agile process, the tests are supposed to act as a specification for the expected behaviour of the system, but how would you record the fact that a certain scenario is explicitly out-of-scope rather than just getting accidentally missed out?
As a bit of clarification, here's the situation I'm trying to avoid: We have discussed a scenario and decided we won't handle it because it doesn't make sense. Then later on, when someone is trying to write the user guide, or give a training session, or a customer calls the help desk, exactly the same scenario comes up. They ask me how the system handles it, and I think "I remember talking about this a year ago, but there are no tests for it. Maybe it got missed off the plan, or maybe we decided it wasn't a sensible use case, or maybe there's a subtle reason why you can't actually ever get into that situation", and then I have to search old Skype chats or emails to find the answer. What I want to achieve is to make sure we have a record of why we decided not to support that scenario, so that I can refer back to it in the future. At the moment I put this in the spec where everyone can see it.
I would document deliberately unsupported use cases/stories/requirements/features in your test files, which are much more likely to be regularly consulted, updated, etc. than specifications would be. I'd document each unsupported feature in the highest-level test file in which it was appropriate to discuss that feature. If it was an entire use case, I'd document it in an acceptance test (e.g. a Cucumber feature file or RSpec feature spec); if it was a detail I might document it in a unit test.
By "document" I mean that I'd write a test if I could. If not, I'd just comment. Which one would depend on the feature:
For features that a user might expect to be there, but which the user has no way to access (e.g. a link or menu item that simply isn't present), I'd write a comment in the appropriate acceptance test file, next to the tests of the related features that do exist.
Side note: Some testing tools (e.g. Cucumber and RSpec) also allow you to have scenarios or examples in feature or spec files which aren't actually run, so you can use them like comments. I'd only do that if those disabled scenarios/examples didn't produce messages in the test output that might make someone think something was broken or unfinished. For example, RSpec's pending/skip loudly announces that there is work left to be done, so it would probably be annoying to use it for cases that were never meant to be implemented.
For situations that you decided not to handle, but which an inquisitive user might get themselves into anyway (e.g. entering an invalid value into a field or editing a URL to access a page for which they don't have permission), don't just ignore them, handle them in a clean if minimal way: quietly clear the invalid value, redirect the user to the home page, etc. Document this behavior in tests, perhaps with a comment explaining why you aren't doing anything even more helpful. It's not a lot of extra work, and it's a lot better than showing the user an error page or other alarming behavior.
For situations like the previous ones, but which you decided not to handle at all, or couldn't find a way to handle, you can still write a test that documents the situation, for example that entering some invalid value into a form results in an HTTP 500 (a sketch follows this list).
If you would like to write a test, but for some reason you just can't, there are always comments -- again, in the appropriate test file near tests of related things that are implemented.
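To make that concrete, here is a minimal sketch of what such documenting tests could look like in pytest (the function, scenario, and decision references are hypothetical; the same pattern applies in RSpec or Cucumber):

    # A sketch in pytest (names hypothetical): tests that record
    # deliberately unsupported scenarios right where developers will look.
    import pytest

    def submit_quantity(value):
        # Stand-in for the real form handler: invalid input is quietly cleared.
        return None if value < 0 else value

    def test_negative_quantity_is_quietly_cleared():
        # Deliberate decision, not an oversight: negative quantities make no
        # sense in our domain, so we clear them rather than raise an error.
        assert submit_quantity(-1) is None

    @pytest.mark.skip(reason="Out of scope by design, not unfinished work: "
                             "bulk import is deliberately unsupported.")
    def test_bulk_import():
        ...

Note that the skip reason is worded so that nobody reading the test output mistakes the scenario for work left to be done.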
You should never test undefined behavior, by ...definition. The moment you test a behavior, you are defining it.
In practice, either it's valuable to handle an edge case or it isn't. If it is, then there should be a user story for it, which acts as documentation for that edge case. What you don't want is an old user story documenting a future behavior, so it's probably not advisable to document undefined behavior in stories that don't handle it.
More generally, agile development always works iteratively. Edge case discovery is part of iterative work: with work comes increased knowledge, and with increased knowledge comes more work. It is important to capture these discoveries in new stories, instead of trying to handle everything in one go.
For example, suppose we're developing Stack Overflow and we're doing this story:
As a user I want to search questions so that I can find them
The team develops a simple question search and discovers that we need to handle closed questions... we hadn't thought of that! So we simply don't handle them (whatever the simplest behavior to implement is). Notice that the story doesn't document anything about closed questions in the results. We then add a new story:
As a user I want to specifically search closed questions so that I can find more results
We develop this story, and find more edge cases, which are then more stories, etc.
In the design spec we explicitly state that this scenario is out of scope because the system isn't designed to be used in that way
Having undocumented functionality in your product really is a bad practice.
If your development team follows BDD/TDD techniques, they should (note emphasis) reduce the likelihood of this happening. If you found this edge case, then what makes you think your customer won't? Having an untested and unexpected feature could compromise the stability of your product.
I'd suggest that if an undocumented feature is found:
Find out how it was introduced (a common reason: a developer thought it might be useful in the future and didn't want to throw away work they'd produced!)
Discuss the feature with your business analysts and product owner. Find out if they want such a feature in your product. If they do, great: document it and test it. If they don't, remove it, as it could be a liability.
You also had a question regarding the tracking of the outcome of these edge-case scenarios:
What I want to achieve is to make sure we have a record of why we decided not to support that scenario so that I can refer back to it in the future.
As you are writing a design/specification document, one approach you could take is to version that document. Then, when a feature/scenario is taken out you can note within a version change section in your document why the change was made. You can then refer to this change history at a later date.
However, I'd recommend using a planning board to keep track of your user stories. Using such a board, you could write a note on the card (virtual or physical) explaining why the feature was dropped, which could also be referred to at a later date.
It's no question that we should code our applications to protect themselves against malicious, curious, and/or careless users, but what about from current and/or future colleagues?
For example, I'm writing a web-based API that accepts parameters from the user. Some of these parameters may map to values in a configuration file. If the user messes with the URL and provides an invalid value for the parameter, my application would error out when trying to read from that section of the configuration file that doesn't exist. So of course, I scrub params before trying to read from the config file.
Now, what if, down the road, another developer works on this application and adds another valid value for this parameter, one that passes the scrubbing process, but doesn't add the corresponding section to the configuration file? Remember, I'm only protecting the app from bad users, not bad coders. My application would fail.
On one hand, I know that all changes should be tested before moving to production and something like this would undoubtedly come up in a decent testing session, but on the other hand, I try to build my applications to resist failure as best as possible. I just don't know if it's "right" to include modification of my code by colleagues in the list of potential points of failure.
For this project, I opted not to check if the relevant section of the config file existed. As the current developer, I wouldn't allow the user to specify a parameter value that would cause failure, so I would expect a future developer to not introduce behavior into a production environment that could cause failure... or at least eliminate such a case during testing.
What do you think?
Lazy... or philosophically sound?
"I just don't know if it's "right" to include modification of my code by colleagues in the list of potential points of failure."
It isn't right to prevent your colleagues from breaking things.
You don't know what new purposes your software will be put to. You don't know how it will be modified in the future.
Instead, do this.
Write simple correct software, put it into production, and stop worrying about somebody "breaking" something.
If your software is actually simple, other people can maintain it without breaking it.
If you make it too complex, they will (a) break it anyway, in spite of everything you do and (b) hate you for making it complex.
So, make it as simple as possible to do the job.
It sounds like you're taking reasonable steps to protect against incompetence by doing some scrubbing of the input. I don't believe that you're responsible for protecting against every possible misuse of your code or bad input. I'd go further than that and say that as long as your code explicitly documents what is and isn't acceptable input, you've done enough, particularly if the added "idiot error checking" code would be bloated or (worse) slow.
A procedure that documents exactly what inputs are acceptable is reasonable for an inner api. That being said, I often code (over) defensively, but that's mostly due to the environment I'm in and my level of trust in the rest of the code.
Lazy... or philosophically sound?
Lazy... and arrogant. By coding in a way that makes mistakes show up quickly, you protect the app against your own mistakes just as much as against the mistakes of others. Everyone makes more mistakes than they think.
Of course, rather than adding more code to detect whether the config file and the parameter checking match, it would be much better if the parameter checking were based on the config file so that there's only one place where new values are added and an inconsistency is not possible.
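A minimal sketch of that single-source-of-truth idea, using Python's configparser and a hypothetical config layout: the set of valid parameter values is derived from the config file itself, so the whitelist and the config cannot drift apart.

    # The config file itself is the whitelist: a parameter value is valid
    # only if its section actually exists; anything else fails fast.
    import configparser

    config = configparser.ConfigParser()
    config.read("app.ini")

    def section_for(param: str) -> configparser.SectionProxy:
        if param not in config:
            raise ValueError(f"Unknown parameter value: {param!r}")
        return config[param]

A future developer who adds a new section gets a new valid value for free, and a value without a section is rejected with a clear message instead of failing somewhere deep in the code.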
I think it is the future developer's responsibility to ensure that she does not introduce bugs or failure points into your code. When you sign off from the project (if ever?!), part of the signing process should at least be that the code has been presented as bug-free as possible; this would then limit the liability you hold for future problems.
If your code is kept in a version control system, it would be trivial to create a tag marking the point at which you handed over your code. Should a bug arise that someone may be trying to blame on you (if this is the angle you're coming from!), you can compare the current codebase to your original, and so prove that it is the changes made to your original implementation which caused the bug. (Assuming, of course, that the changes do cause unexpected behaviour and don't fix your "bug-free" code, grin.)
One method I have used in the past to ensure data integrity (and it isn't foolproof) is to check an input at specific offsets for specific values, ensuring that the input wasn't tainted.
Hope that helps.
Both ways are fine, philosophically speaking. I would make the judgment based on how likely you think it is for this to happen. If you can be almost positive that somebody will break your code in that way, it might be polite for you to provide a check that will catch that when it happens.
If it doesn't seem particularly likely, then it's just part of their job to make sure they don't break the code.
In either event though, your technical notes (or other appropriate documentation) should clearly indicate that when the one change is made, the other change is also required.
I would use an inline comment for future developers, or for a developer like me who tends to forget what's going on in every part of the application after not working on it for months.
Don't worry about actually coding to foil future coders, that's impossible. Just add all the information someone needs to support or extend what you're doing within the source code context.
This is documentation, as Steve B. mentioned. I'd just make sure it's not external, as that has a tendency to get lost.
What UI/GUI guidelines should be followed that subtly (or not so subtly) direct users so they don't shoot themselves in the foot?
For instance, you might want to give power users the ability to "clean" a database of infrequently used records, but you don't want a new user to try out that option if they've just spent hours entering new records - they may lose them all because they're 'infrequently used'. Please don't address this specific issue - it's just here to clarify the question.
While one could code a bunch of business logic in place to prevent some issues, you can't account for everything a user might do.
What are some common techniques, tips, and tricks that prevent improper usage?
I.e., how should I design the interface to alert users that a function or action is to be taken with care?
What should I design in that limits risk and exposure if a poor action is taken?
-Adam
Everything can be undone. Don't erase - deactivate. Back up before every destructive operation, and give the user a way to restore.
That's the path. It's hard to follow it all the way, but it's what you're aiming for.
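A minimal sketch of the deactivate-don't-erase idea (Python; the record shape is hypothetical):

    # Deletion only flags the record, so every "destructive" operation
    # has a way back.
    from dataclasses import dataclass

    @dataclass
    class Record:
        data: str
        deleted: bool = False

    def delete(record: Record) -> None:
        record.deleted = True       # deactivate, don't erase

    def restore(record: Record) -> None:
        record.deleted = False      # the user's way back

    def visible(records: list[Record]) -> list[Record]:
        return [r for r in records if not r.deleted]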
Make it possible to undo dangerous actions.
If it's a reasonably big application or system, require separate admin access for dangerous operations as well.
Don't Make Me Think
And you can, in fact you HAVE to account for everything they might do. Because you (as the designer) are the one who gives them the ability to do all those things.
Before putting ANY item on a GUI, ask yourself "Can this be misused?" and if it can, you might want to go with a lower level of customizability.
Example hierarchy
Button - Can be clicked.
T/F Radio Button (mandatory) - Only two options.
Combo box - Many options, possibly "no option". More confusing.
Text field - A myriad of wildly inconsistent options. More confusing for the user, more dangerous for the coder.
Basically, if the user doesn't need extra options, then don't give them extra options. You'll only confuse them.
This is an old article, but it's still a great one:
Microsoft Inductive User Interface Guidelines
Never rely on anything that says "Are you sure?" The user is ALWAYS sure and that's if they even bothered to read it before dismissing.
Partition your users and have fine-grained permissions.
Define some power-user permissions that enable the "more dangerous" operations.
Power user permission is not given out lightly -- only to actual power users -- and revoked readily.
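One way to sketch that gating (Python; the permission names are hypothetical):

    # Dangerous operations refuse to run unless the caller explicitly
    # holds the power-user permission that guards them.
    from dataclasses import dataclass, field

    @dataclass
    class User:
        name: str
        permissions: set = field(default_factory=set)

    def require(permission):
        def decorator(func):
            def wrapper(user, *args, **kwargs):
                if permission not in user.permissions:
                    raise PermissionError(
                        f"{user.name} may not call {func.__name__}")
                return func(user, *args, **kwargs)
            return wrapper
        return decorator

    @require("db.clean")   # handed out sparingly, revoked readily
    def clean_database(user, days_unused):
        print(f"Purging records unused for {days_unused} days...")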
I'm of the school of thought that, in cases of unclarity or ambiguity, the user is rarely wrong, and the UI is always to blame. So when you say "punch the user in the face", tagged with "pebkac", I'm thinking that you would do well with a slap in the face.
Unfortunately, I'm unable to give any good UX advice, since I'm a mere programmer, and therefore more or less by definition disqualified as a good UI designer. I'd just like to point out the possibility that you actually could be the one who needs to get a clue, and try to be more humble towards the users.
Edit for Adam:
The little I know about UX is how little I know about it. It's an entire career path. I know for a fact that there's very little anyone can learn by asking a single make-me-good-at-this question at Stack Overflow. It's like me asking "help me write better code", with the body text formulated as a story of how my colleagues ridicule me of my code.
We, programmers, are engineers. We like order and reason and logical decisions. But the average user is not a programmer, not an engineer, and in many cases not the least bit interested in computers themselves.
I'm glad that people are giving you nuggets of good advice, and I'm glad that you, contrary to my first impression (I'm sorry about that), are eager to take those bits and understand the needs of the user.
But the point remains: You need to buy books (Don't Make Me Think is a great place to start, as already recommended). You need to watch how people use your software. You need to observe where they stumble, and jig things around until your UIs seem natural.
I'm sorry I still can't give you an answer. Because I don't have it. And even if I would have it, I would probably have to charge you 50EUR an hour, for years into the future.
Make the results of the user's action visible and offer a way to undo those changes.
When the changes are visible, the user gets feedback on whether the results were what he intended, and if they are not, the possibility to undo lets him try again to reach his goal. If possible, make the results of the action visible before the user invokes it (for example, when dragging some element, show what would happen if the user released the mouse button: visualize the addition of the element where it would be moved to and its removal from where it would be moved from).
There are a couple of types of undo. The simplest is a single-step undo (as in Notepad), but it is often not enough. Better is a multi-step undo (as in Word), which covers most cases but does not allow undoing a specific action without undoing all the actions done after it. That can be solved by object-specific undo: for example, in a form with many fields (or cells in a grid, as in Excel), right-clicking a field would show a list of its previous values. For deleted data you could have a store of deleted data from which the user can restore things after deleting them (for example, if the user deletes a slide in PowerPoint). And finally you could keep a full version history of every change, the way Local History works in IntelliJ IDEA: make a history entry every time the file is saved (and save everything automatically after a couple of seconds of inactivity).
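A bare-bones multi-step undo can be sketched with a command stack (Python; the form model is hypothetical):

    # Each action knows how to reverse itself; undo pops the most
    # recent action off a history stack.
    class SetField:
        def __init__(self, form, field, new_value):
            self.form, self.field, self.new_value = form, field, new_value
            self.old_value = form.get(field)

        def do(self):
            self.form[self.field] = self.new_value

        def undo(self):
            if self.old_value is None:
                self.form.pop(self.field, None)
            else:
                self.form[self.field] = self.old_value

    history = []

    def perform(action):
        action.do()
        history.append(action)

    def undo_last():
        if history:
            history.pop().undo()

Object-specific undo, as described above, would simply keep one such stack per field or per object instead of a single global one.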
Confirmation dialogs don't help. The user might read it the first time, but soon after that clicking "OK" in the dialog becomes an automated process, and the user will press Enter before the dialog even shows up. Then the confirmation dialog has become just a source of unnecessary mechanical work. The user is always sure about doing some action, even when he is wrong - otherwise he would not have done that action.
Well there are a few different ways that I can/do go about these types of things.
User documentation - first and foremost give them some documentation to work with, and make the systems easy for them to use. Just general usability and descriptive names/actions for everything.
Provide confirmation screens with warnings. Full disclosure of what the action is going to do, with the warnings inside of a yellow box. It draws attention to it and helps prevent the need for the other items.
Have a roll-back plan. For large risky operations you can either simply set "deleted" flags, or offload the data to a temporary "recycle bin" of sorts should they accidentally remove/modify data that was unintended.
Require multiple approvals. For data purge operations especially, go to a two-tiered approach, requiring approval from separate users.
These are just a few of the ideas that I have.
Two things immediately come to mind.
The first is the notion of progressive disclosure, i.e., only show users what they need in order to accomplish the task at hand. How many UIs have we seen that have hundreds of controls on a single dialog? Divide the controls into their respective tasks and only allow the user to do a single task at a time. An Advanced button on a dialog is one way to implement this, and this concept has the added benefit of separating the power users from the run-of-the-mill users. Run-of-the-mill users are less likely to attempt a task that is likely to be beyond their skill level.
The second is to leverage the wizard concept for complicated tasks. I know wizards have fallen out of style, but if a task is truly complicated, users usually appreciate having their hands held the first few times. A good example of this is the WinZip wizard interface. If you've never zipped a file before, this wizard uses a logical progression to walk you through the process. And then, once you've grown comfortable with it, you can switch to the classic interface to zip files more quickly.
Of course, to do all of this requires a commitment not only by the developers, but by management. And that, sadly, is where many of these usability battles are lost.
I've been building desktop software for over 10 years now; mostly it's simple data-input software. My problem is, it always looks the same: a tree view on the left and a lot of text/data fields on the right, depending on the type of data currently being worked on. Are there any fresh ideas about how such software should look nowadays?
For further clarification:
It's very hierarchical data, mostly for electronic devices. There are elements of data which provide static settings for the device, and there are parts which describe some sort of 'program' for the device. There are a lot (more than 30) of different input masks. Of course I use combo boxes and up/down entry fields.
Having all of your software look the same is a good thing. One of the best ways to make it easy for people to use your software is to make it look exactly the same as other software your users already know how to use.
There are basically two common strategies for how to handle entry of a lot of data. The first is to have lots of data entry fields on one page. The next is to have only a few data entry fields but a lot of pages in a sort of wizard-style interface. Expert users find the latter much slower to use, as do users who are entering data over and over again. However, the wizard style interface is less confusing for newer users since it offers fewer elements at once and tends to provide more detail on them.
I do suggest replacing as many text fields as possible with auto-complete-based combo-boxes. This allows users to enter data exactly the same as with text-boxes, but also allows users to save typing by hitting the down key to scroll through choices after typing part of the data in.
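The filtering behind such a combo box can be very simple; a Python sketch with a hypothetical choice list:

    # As the user types, offer only the choices matching the typed prefix.
    CHOICES = ["Amplifier", "Attenuator", "Oscillator", "Oscilloscope"]

    def suggestions(typed: str) -> list[str]:
        prefix = typed.casefold()
        return [c for c in CHOICES if c.casefold().startswith(prefix)]

    # suggestions("osc") -> ["Oscillator", "Oscilloscope"]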
Providing more detail on what data is being entered would probably yield more specific answers.
I'd also answer with a question: what is your motivation for considering a change? Like the other posters, I'd agree that there is some value in consistency, but there's also a strong value in not ignoring the niggles-in-the-back-of-the-mind feelings you have. Maybe you have a sense that your users aren't as productive as you'd like them to be, or you've heard feedback to that effect from your customers, or you're just looking to add some innovation for your own interest. Scratching itches is a good trait in a developer, in my view.
One thing I'd advocate would be a detailed user study. How much do you know about what your users do with the interfaces you create? Do you know the key tasks, the overall workflow? Would you know if one task regularly consumed 60% of your users' time, or if there was a task that was only performed once a month? Getting a good sense of what the users actually do (and not what they say they do) is a great place to start thinking about what changes might be worthwhile, especially if you can refactor the task to get a qualitatively different user experience.
A couple of specific alternative designs you might like to include in re-visioning the UI would be facet browsing (works well for searching and exploring in hierarchies), or building a database of defaults/past responses so that text boxes can use predictive completion. However, I think my starting point would be the user study.
Ian
If it works...
Depending on what you've got happening with the data (that is, is it hierarchical, or fairly flat), you might want to try a tab-based metaphor, or perhaps the "Outlook-style", with a sidebar showing the sections of an application. One other notion I've played with lately is the "Object desktop" that I first saw proposed by Scott Ambler (Building Object Applications That Work). In this, you can display collections of items, or the user can "peel off" individual records for easy access.
Your information is not enough to really suggest an interface alternative. However, may I answer your question with a question? Why do you think you have to change it? Has your customer complained? If not, it looks like your customer is happy with the way the software works right now, so I wouldn't change it. If your customer complains about it, he'll most likely not just say "It's bad"; he will say "Why can't it look like ..." and that will give you an idea of how to change it.
I once had to re-design a very outdated goods management system. The old one was written for a now-dead database system, still running in MS-DOS. The customer suggested I should create a prototype of how this re-implementation might look, and then he'd decide whether I got the job. I replaced the old, dead database with a modern MySQL database, I replaced the problematic shared peer access with a client-server approach, and I chose to rewrite the UI in Java, since different OSes were used and this had the smallest porting costs. So far the concept seemed good; the customer liked it. However, when he asked his employees what they thought about it, they said: "So far it's great, but we have one question: why doesn't it look like the old one?". It turned out that even with all the modern technologies, they wanted the interface to look and be operated exactly like the old one. So I had to re-build a 1986 usability-nightmare MS-DOS UI in Java, because no other UI was accepted.
For me it is more about a clean, usable, logical design than anything else. If your program makes sense to the user, isn't clunky and works as advertised, then everything else UI related is essentially just like painting the house. I've sometimes rolled out a new version of a program with essentially the same controls that are skinned differently.
There's a reason that you've probably chosen the tree view - because it probably makes really good sense to do so. There are different containers and controls available in the various UI libraries, depending on the language, but I tend to stick with the familiar because the user probably gets how a tree control works and how a combobox works.
A user interface needs to be usable; just don't make the mistake of changing something working into something fancy-schmancy just because it looks better (been down that road)...
Make sure that added widgets/controls really add business value.
Make sure that added widgets/controls do not mess up your architecture (too much) and make the application harder to manage/maintain.
Try to keep to platform standards on how to do things (for example, the Vista UX guidelines).
:)
//W
I work for a CMMI level 5 certified company, and one thing I hate about it is the amount of documents we prepare (as a programmer I already hate documents). We have lots and lots of documents like the PID (project initiation document), business requirements, system requirements, tech spec, code review checklist, issue logs, defect logs, configuration management plan, configuration management checklist(s), release documents, and lots more...
Almost 90% of these docs are done just for the sake of the QA audit :) ... What do you think are the most important documents for a project? What documents can be used in the long run by another developer?
Please share your good practices here. I would like to use them for my own projects or the company I am planning to start in the long run.
Thanks
The key document is a good functional spec. There should be one and only one reference document for a system.
Overdoing documentation proliferates a large number of small requirements and spec documents every time someone changes a system or interface. For a system of any complexity, before long you have your spec distributed around several hundred assorted Word, Excel, Visio and even PowerPoint files. When this happens you lose clarity about what is current, or even whether you have located and identified all pertinent documentation.
The BRD-SRD-Tech spec progression is based on an assumption that the business signs off the BRD, a business analyst signs off the SRD against requirements documented in the BRD and the technical specification is signed off against the SRD. This generates a web of sign-offs, multiple documents with redundant information and makes it difficult and clumsy to keep the spec documents up to date.
Because of this, subsequent requirements documentation tends to take the form of a series of change requests and supplemental requirement and spec docs, each with their own sign-off and audit process. You gain CYA and an audit trail (or at least the appearance of an audit trail), but you lose clarity. There is now no definitive reference document for the system, and it is difficult to establish what is current or relevant to any particular activity. The net result is that your business analysis process gets bogged down in forensic research, which adds overhead and latency to delivery schedules.
A spec document should be built in such a way that there is one definitive reference for any given system or subsystem. The document should be kept up to date and versioned. Get a good technical documentation tool like FrameMaker, so your process can scale and the document has some structural integrity of the sort lacking in Word.
For me the only real document I ever use is a spec. The more detail the better. However, it doesn't need to be all completed at one time, and it doesn't need to be particularly formal. What is far more useful to me than documents that are checked and signed and double-checked and double-signed is always being able to get the latest version of a document, and being able to talk to people about what they have written and get a decision in the case of any ambiguity. This is far more useful to me than anything else.
To sum up: a spec is the only document I have ever found useful; however, it pales in comparison to having a project manager who knows the proposed system inside out and can make sensible decisions based on what they know.
Documentation is like tofu -- most people hate it until they realize that under the right conditions, it can be really good.
The problem is that what you consider documentation is mostly made for documentation's sake. You, as a developer, don't see any immediate value in the documents you produce because you know you can do your job without all the TPS reports which you're required to make.
Unfortunately, I'm going to wager that there's not a lot you can do about it in a company where you're being forced to eat raw tofu all the time. You'll probably just have to suck it up and write the docs which your company requires, but you can at least do one thing... you can write documents which are at least useful to you, and you can pass them along with your code for others who will maintain it.
Aside from inline documentation, you could set up a wiki to be used by yourself and people on your team. This type of documentation is searchable, which is already a big plus to developers, plus it's more of a living document instead of a homework-like paper you had to write. You already post to SO, so just think of your documentation as pooling your knowledge in a more useful place.
What do you think are the most important documents for a project?
Different people have different needs: for example the documents which the owner needs (e.g. the business contract) aren't the same as the documents which QA needs.
What documents can be used in the long run by another developer?
IMO the most important document (except for the source code) is the functional specification: because what the software is supposed to do (as opposed to what it is doing) is the one thing that can't necessarily be reverse-engineered. See also How does a good developer keep from creating code with a low bus hit factor?
User Stories, burndown chart, code
I'm a fan of the old 4+1 views:
Use Case view (a/k/a user stories). There are several forms: proper use cases, forward-looking use cases that aren't as well defined and epics which need to be decomposed.
Logical view. The "static" view. UML Class diagrams and the like work well here as a design document. This also includes request and response formats for various protocols. Here is where we document the RESTful requests and responses. This includes the REST URI design.
Process view. The "dynamic" view. UML activity diagrams, sequence diagrams and statecharts and the like for here for design documents. In some cases, simple narratives work well. In other cases, there's a State design pattern, and it requires a combination of class diagrams and statecharts to show how the stateful objects interact.
This also includes protocols (e.g. REST). Here is where we define any special processing for the various REST requests.
This also includes an authentication or authorization rules, and any other cross-cutting aspects like security, logging, etc.
Component view. The pieces we're building for deployment. This includes the stuff we depend on, the structure of the modules and packages, etc. This is often a simple component diagram or a list of components and their dependencies.
Deployment view. We try to generate this from the code as deployed. Since we're using Python, we use epydoc to create the API documentation. We also use Sphinx to import module documentation into this view of the software (see the sketch just below).
This also includes the parameters, settings, and configuration details.
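For what it's worth, the Sphinx side of that can be as small as this (a minimal conf.py; the project name is hypothetical):

    # conf.py -- minimal Sphinx configuration
    project = "MyProject"
    extensions = ["sphinx.ext.autodoc"]   # pull API docs from docstrings

An .rst page can then import a module's docstrings with the ".. automodule::" directive and its ":members:" option, which is how module documentation ends up in the deployment view.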
This, however, isn't sufficient.
When projects start, you have to work up to this through a series of sprints.
The first sprints build just the use case view.
Subsequent sprints build an "architecture" to implement the use cases. The architecture document has 4+1 views, but at a high level of abstraction. It summarizes the structure of the model schemas, the requests and replies, the RESTful processing, other processing, the expected componentry, etc. It never has a Deployment view. We generally reference operator guide and API documents as the deployment view of an architecture.
Then design-and-construction sprints build (and update) detailed 4+1 view documents for various components.
Then release sprints build (and update) the deployment views.
From the project point of view, the most important documents are those that normally include the word Plan, such as the Project Plan, Configuration Management Plan, Quality Plan, etc.
What you are describing is common in process improvement, and normally stems from two major causes. One is that the system really is overreaching and getting in the way of real work being done. The other is actually answered in your question: it is not that the documents are done only for the sake of audits; your focus should not just be how useful the doc is for other developers, but how useful it is for the project or the company as a whole.
One usually looks at things from one's own perspective; sometimes it's necessary to look at the bigger picture.
I've seen different program managers write specs in different format. Almost every one has had his/her own style of writing a spec.
On one hand are those wordy documents which, given to a programmer, are likely to cause him/her to miss a few things. I personally dread Word-document specs... I think it's because of my reading style... I am always speed-reading, which I think causes me to miss out on key points.
On the other hand, I have seen innovative specs written in Excel by one of our clients. The way he wrote the spec was to create a mock application in Excel and use some VBA to mock its behavior. He would note things (in comments) like where the form should go on a button click, or what action it should perform.
On data forms, he would lay out the form in cells, and on each data-entry cell he would comment on what the valid values are, what validation it should perform, etc.
I think that using this technique made it less likely to miss out on things that needed to be done. Also, it was much easier for the developer to unit test. The tester, too, had a better understanding of the system, as it 'performed' before actually being written.
Visio is another tool for screen design, but I still think Excel has an edge over it, considering its VBA support and its functions.
Do you think this should become a more popular way of writing specs? I know it involves a bit of extra work on the part of the project manager (or whoever is writing the spec), but the payoff is huge... I myself could see a lot of productivity gain from using it. And are there any better formats of specs that would actually help the programmer?
Joel on Software is particularly good at these and has some good articles about the subject...
A specific case: the write-up and the spec.
Two approaches have worked well for me.
One is the "working prototype" which you sort of described in your question. In my experience, the company contracted a user interface expert to create fully functional HTML mocks. The data on the page was static, but it allowed for developers and management to see and play with a "functional" version of the site. All that was left to do was replace the static data on the pages with dynamic content - this prototype was our spec for the initial version of our product. The designer even included detailed explanation of some subtle behavior in popup dialogs that would appear when hovering over mock links. It worked well for our team.
On a subsequent project, we didn't have the luxury of the UI expert, but we used a similar approach. We used a wiki to mock a version of the site. We created links between the functional aspects of the system and documented each piece of functionality in detail. Each piece of functionality could, in turn, link to detailed design and architecture decisions. We also used the wiki to hold our feature list for each release (which became our release notes). These documents linked back to the detailed feature pages. The wiki became a living document, describing our releases and the evolution of our system in great detail. It was an invaluable resource.
I prefer the wiki to the working prototype because it's more easily extensible - growing and becoming more valuable as your system evolves.
You may want to have a look at Test-Driven Requirements, which is a technique for making executable specifications.
There are some great tools like FIT, FitNesse, GreenPepper or Concordion for that purpose.
One of the Microsoft Press books has excellent examples of various documents, including an SRS (which I think is what you are talking about). It might be one of the requirements books by Wiegers (I think that's his name, I'm blanking on it right now). I've seen US government organizations use that as a template, and from my three work experiences with the government, they like to make their own wherever they can, so if they are reusing it, it must be good.
Also - a spec should contain NO CODE, in my opinion. It should focus on what the system must do, should do, and can not do using text and diagrams.