Increase quality of crowd-sourced info - crowdsourcing

I'm working on an app that gives traffic alerts in real time and is based on crowd-sourced information. In other words, people use the app and report traffic problems and at the same time they are informed about traffic problems in their area.
A difficult task is how to distinguish real alert reports from fake ones so that the app behaves properly and is useful.
Do you know of any documentation regarding this issue or any programmer stories, insights into this problem? How should this problem be tackled?
What I've come up with so far is:
each person using the app is uniquely identified
each alert report has a reliability value in an interval 1 .. x
the reliability of a report is calculated based on the number of users that reported it or confirmed it and the reputation of those people. But how exactly?
each person has a reputation value which is calculated somehow. But how?
I'm not sure how to handle the reputation/reliability stuff so I'd love some input on this. There must be some documentation on how to create a crowd-sourcing product that works.

Panos Ipeirotis has a fabulous talk on this subject. Yes, you want to incentivize good behavior. If Waze is too complicated this talk will be too, but it will give you a good idea of what is possible if you throw the kitchen sink at it.
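Until you are ready for the kitchen-sink approach, here is a minimal sketch of the reliability/reputation loop the question describes. Everything here is a hypothetical placeholder (the function names, the starting reputation of 0.5, the saturation constant), not a proven scheme; the only point is to weight confirmations by reporter reputation and feed verified outcomes back into reputation:

# Hypothetical sketch: reliability of a report from reputation-weighted confirmations.
# Reputations start at a neutral value and move with the outcome of each report.

def report_reliability(confirmations, reputations, scale=10):
    """confirmations: list of user ids that reported/confirmed the alert.
    reputations: dict user_id -> reputation in [0, 1].
    Returns a reliability score in the interval 1..scale."""
    if not confirmations:
        return 1
    weight = sum(reputations.get(u, 0.5) for u in confirmations)
    # Saturating curve: more (and more reputable) confirmers -> closer to scale.
    score = 1 + (scale - 1) * weight / (weight + 2.0)
    return round(score, 1)

def update_reputation(reputations, user_id, report_was_real, step=0.05):
    """Nudge a user's reputation up for confirmed-real reports, down for fakes."""
    current = reputations.get(user_id, 0.5)
    delta = step if report_was_real else -2 * step   # punish fakes harder than it rewards truth
    reputations[user_id] = min(1.0, max(0.0, current + delta))

reps = {"alice": 0.9, "bob": 0.5, "mallory": 0.1}
print(report_reliability(["alice", "bob"], reps))   # several reputable confirmers -> higher score
print(report_reliability(["mallory"], reps))        # lone low-reputation reporter -> low score
update_reputation(reps, "mallory", report_was_real=False)

The asymmetric penalty for fake reports is one common choice for discouraging abuse, but the actual weighting and decay rules are exactly the part you would tune against real data.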

Related

How to correlate the below given scenario for check boxes?

In my script I have a scenario where the page contains multiple check boxes, for example 10. Depending on what the user needs, each user selects a different number of them: for example one user selects 4 check boxes and another user clicks 5, so it varies per user.
So how do I correlate those values?
Thanking you.
From the website: "Please don’t share your solutions, ask for help, or help others. This is meant to be a challenge."
So you appear to be violating one of the primary rules of this website. I have looked at this challenge and it's really good for gauging someone's knowledge.
However, to address the technology generally: in reading your question I get the sense you may be missing certain fundamental knowledge for this kind of work, so here are the fundamentals. Hopefully my answer will increase your general knowledge, and hopefully you can use it to address this specific question.
Definitions:
Correlation - you're taking data the SERVER sends to the browser, capturing it and sending it back. Information present on web pages would fit into this category.
Parameterization - you've got a set of values you'd like to put into web forms. These are usually values like names, addresses, etc.
Also understand exactly what is happening when you conduct certain actions in your browser. When you "click" a checkbox, does that actually send a message to a server? Usually it doesn't (though not always). So when you use phrases like 'click a checkbox', that tells me you may not appreciate the fact that performance testing is server focused, not browser focused.
Performance testing isn't intuitive so you need to understand these concepts. If you dedicate time to understanding the concepts I've outlined above you'll have the knowledge to complete the challenge.
Good luck.
What is driving the variation on check boxes being checked? Is it the result of something that comes back from the server, from a previous request? Or is it somewhat random based on whatever the user wants to do at runtime?
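For what it's worth, correlation in the sense defined above boils down to capturing a value the server sent and replaying it in a later request, while parameterization is substituting your own per-user test data. Here is a minimal, tool-agnostic sketch in Python; the URL, token name, and form fields are invented for illustration, and real load-testing tools have their own capture functions for this:

import re
import requests   # third-party HTTP client, used here only for illustration

BASE = "https://example.test/app"          # hypothetical application under test

session = requests.Session()

# Correlation: capture a dynamic value the SERVER sent (e.g. a hidden form token)...
page = session.get(f"{BASE}/form")
match = re.search(r'name="csrf_token" value="([^"]+)"', page.text)
token = match.group(1) if match else ""

# Parameterization: data WE choose to send, which varies per virtual user.
checked_boxes = ["opt1", "opt3", "opt4"]   # one user ticks 3 boxes, another may tick 5

# ...and send the captured value back along with the parameterized selections.
response = session.post(
    f"{BASE}/submit",
    data={"csrf_token": token, "options": checked_boxes},
)
print(response.status_code)

If the set of checkboxes itself comes back from the server (the comment above asks exactly this), then the checkbox names are also correlated values, not parameters you invent.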

How to get users to read error messages? [closed]

If you program for a nontechnical audience, you find yourself at a high risk that users will not read your carefully worded and enlightening error messages, but just click on the first button available with a shrug of frustration.
So, I'm wondering what good practices you can recommend to help users actually read your error message, instead of simply waving it aside. Ideas I can think of would fall along the lines of:
Formatting of course help; maybe a simple, short message, with a "learn more" button that leads to the longer, more detailed error message
Have all error messages link to some section of the user guide (somewhat difficult to achieve)
Just don't issue error messages, simply refuse to perform the task (a somewhat "Apple" way of handling user input)
Edit: the audience I have in mind is a rather broad user base that doesn't use the software too often and is not captive (i.e., no in-house software or narrow community). A more generic form of this question was asked on slashdot, so you may want to check there for some of the answers.
That is an excellent question, worthy of a +1 from me. The question, despite being simple, covers many aspects of the nature of end-users. It boils down to a number of factors which would benefit you, the software itself, and of course the end-users.
Do not place error messages in the status bar - users will never read them, no matter how much you jazz them up with colours etc.; they will always miss them, no matter how hard you try... At one stage during the Win 95 UI testing before it was launched, MS carried out an experiment in which the status bar displayed a message telling subjects to look under their chair (ed - the message explicitly said 'Look under the chair'), with a $100 bill taped to the underside of the chair the subjects were sitting on... no one spotted the message in the status bar!
Make the messages short, and do not use intimidating words such as 'Alert: the system encountered a problem'; the end-user is going to hit the panic button and over-react...
However tempted you are, do not use colours to identify the message...psychologically, it's akin to waving a red flag at a bull!
Use neutral sounding words to convey minimal reaction and how to proceed!
It may be better to show a dialog box listing the neutral error message and to include a checkbox asking 'Do you wish to see more of these error messages in the future?'. The last thing an end-user wants while working in the middle of the software is to be bombarded with popup messages; they will get frustrated and will be turned off by the application! If the checkbox was ticked, log the messages to a file instead...
Keep the end-users informed of what error messages there will be...which implies...training and documentation...now this is a tricky one to get across...you don't want them to think that there will be 'issues' or 'glitches' and what to do in the event of that...they must not know that there will be possible errors, tricky indeed.
Always, always, do not be afraid to ask for feedback when the unexpected happens - such as 'When that error number 1304 showed up, how did you react? What was your interpretation?' - the bonus is that the end-user may be able to give you a more coherent explanation than 'Error 1304, database object lost!'; instead they may be able to say 'I clicked on this so and so, then somebody pulled the network cable of the machine accidentally'. This will clue you in on how to deal with it, and you may modify the error to say 'Oops, network connection disconnected'... you get the drift.
Last but not least, if you want to target international audiences, take into account internationalization of the error messages - hence why to keep them neutral: it makes them easier to translate. Avoid synonyms, slang words, etc. which would make the translation meaningless - for example, Fiat Ford, the motor car company, was selling their brand Fiat Ford Pinto, but noticed no sales were happening in South America; it turned out 'Pinto' was slang there for 'small penis', hence no sales...
(ed)Document the list of error messages to be expected in a separate section of the documentation titled 'Error Messages' or 'Corrective Actions' or similar, listing the error numbers in the correct order with a statement or two on how to proceed...
(ed) Thanks to Victor Hurdugaci for his input, keep the messages polite, do not make the end-users feel stupid. This goes against the answer by Jack Marchetti if the user base is international...
Edit: A special word of thanks to gnibbler who mentioned another extremely vital point as well!
Allow the end-user to be able to select/copy the error message so that they can if they do so wish, to email to the help support team or development team.
Edit#2: My bad! Whoops, thanks to DanM who mentioned that about the car, I got the name mixed up, it was Ford Pinto...my bad...
Edit#3: Have highlighted by ed to indicate additionals or addendums and credited to other's for their inputs...
Edit#4: In response to Ken's comment - here's my take...
No it is not, use neutral standard Windows colours...do not go for flashy colours! Stick to the normal gray back-colour with black text, which is a normal standard GUI guideline in the Microsoft specifications..see UX Guidelines (ed).
If you insist on flashy colours, at least take into account potential colour-blind users, i.e. accessibility, which is another important factor for those who have a disability: screen-magnification-friendly error messages, colour-blindness, people with albinism who may be sensitive to flashy colours, and epileptics as well, who may suffer a seizure triggered by particular colours...
Show them the message. Due diligence and all, but log every error to a file. Users can't remember what they were doing or what the error message was seconds after the event; it's like eye-witness accounts of perpetrators.
Provide a good way to allow them to email or upload the log to you so that you can assist them in reconciling the issue. If it's a web application: even better, you can be receiving information about the situation ahead of anyone even reporting the problem.
Short answer: You can't.
Less short answer: Make them visible, relevant, and contextual (highlight what they messed up). But still, you're fighting a losing battle. People don't read on computer screens, they scan, and they've been trained to click the buttons until the dialog boxes go away.
We put a simple memorable graphic in the error box: not an icon, a fairly large bitmap, and nothing like the standard Windows message icons. Nobody can ever remember the wording of a messagebox (most won't even read it if the box has an "OK" button they can press), but most people DO remember the picture they saw. So our support people can ask the customer "did you see the coffee-drinking guy?" or "did you see the empty desk?". At least that way we know roughly what went wrong.
Depending on your user base, writing funny/rude/personal error messages can work great.
For instance, I wrote an application which allowed our HR people to better track the hire/fire dates of employees. [we were a small company, very laid back].
When they entered wrong dates I would write:
Hey dumb ass, learn how to enter a date!
EDIT: Of course a more helpful message is to say: "Please enter date as mm/dd/yyyy" or perhaps in code to try and figure out what they entered and if they entered "blahblah" to show an error. However, this was a very small application for an HR person I knew personally. Hence again people, read the first line of this post: Depending on your user base...
I recently worked on an Art Institute project, so the error messages were geared towards the audience, such as:
Most art before the Baroque period was
unsigned. However, we’re beyond the
Baroque period now, so all fields must
be completed.
Basically gear it to your audience if at all possible, and avoid utterly boring general errors such as: "please enter email" or "please enter valid email".
Alerts/popups are annoying, that's why everyone hits the first button they see.
Make it less annoying. Example: if the user entered the date incorrectly, or entered a text where numbers are expected, then DON'T popup a message, just highlight the field and write a message somewhere around it.
Make a custom message box. Do not ever use the default message box of the system, for example Windows XP message boxes are annoying themselves. Make a new colored message box, with a different background color than system default.
Very important: do not insist. Some message boxes are modal dialogs and insist on making you read them, which is very annoying. If you can make the message appear as a warning it would be better; for example, Stack Overflow messages that appear right at the top of the page, informing but not annoying.
UPDATE
Make the message meaningful and helpful. For example, do not write something like, "No Keyboard found, press F1 to continue."
The best UI design will be where you virtually never show an error message. The software should adapt to the user. With that sort of a design, an error message will be novel and will grab the user's attention. If you pepper the user with senseless dialogs like that, you're explicitly training them to ignore your messages.
In my opinion and experience, it's the power users, who do not read error messages. The nontechnical audience I know reads every message on the screen most carefully and the problem at this point mostly is: They don't understand it.
This point may be the cause of your experience, because at some point they will stop reading them, because "they don't understand it anyway", so your task is easy:
Make the error message as easy to understand as possible and keep the technical part under the hood.
For example, I translate a message like this:
ORA-00237: snapshot operation disallowed: control file newly created
Cause: An attempt to invoke cfileMakeAndUseSnapshot with a currently mounted control file that was newly created with CREATE CONTROLFILE was made.
Action: Mount a current control file and retry the operation.
to something like:
This step could not be processed due to momentary problems with the database. Please contact (your admin|the helpdesk|anyone who can contact the developer or admin to solve the problem). Sorry for the inconvenience.
Show users that the error message has a meaning and that it's a way to provide assistance to them, and they will read it. If it's just jargon-babble or a generic nonsense message, they will learn to dismiss them quickly.
I have learned that it is very good practice to include an error dialog whose default action is to send (e.g. via email) detailed diagnostic info; if you quickly respond to those emails with valuable information or a workaround, they will worship you.
This is also a great learning tool. In future versions you can fix known issues or at least provide in-place workaround info. Until then, users will learn that this message is caused by X and the problem can be solved by Y - all because someone did explain it to them.
Of course this won't work on a large-scale application, but it works very well in enterprise applications with a few hundred users, and in a lean, agile, release-early-release-often environment.
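A minimal sketch of that "default action sends diagnostics" idea, assuming a desktop app where you can catch the top-level exception; the support address, wording, and use of a mailto link are placeholders rather than a prescribed design:

import platform
import traceback
import urllib.parse
import webbrowser

SUPPORT = "support@example.com"   # placeholder address

def report_error(exc):
    """Build a diagnostic report and open the user's mail client with it prefilled."""
    body = (
        "Something went wrong. Sending this report helps us fix it.\n\n"
        f"Error: {exc}\n"
        f"Details:\n{traceback.format_exc()}\n"
        f"System: {platform.platform()}\n"
    )
    webbrowser.open(
        "mailto:%s?subject=%s&body=%s"
        % (SUPPORT, urllib.parse.quote("Error report"), urllib.parse.quote(body))
    )

try:
    1 / 0                      # stand-in for the failing operation
except ZeroDivisionError as exc:
    report_error(exc)

In a larger system you would more likely post the report to your own endpoint than rely on the user's mail client, but the principle is the same: the user's single click carries the diagnostics to you.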
EDIT:
Since you have a broad user base, I recommend providing software that does what users expect it to do, e.g. do not show them an error message if a phone number is not formatted well; reformat it for them.
I personally like software that does not make me think, and when occasionally there is nothing you (the developer) can do to interpret my intention, provide a very well written (and reviewed by actual users) messages.
It's common knowledge that people do not read documentation (did you read the instructions back-to-back when you plugged in a household appliance?). They look for a way to get results quickly; when that fails, you have to grab their attention (e.g. disable the default button for a while) with meaningful and helpful info. They don't care about your software's failure, they want to get results, now.
One good tip I've learned is that you should write a dialog box like a newspaper article. Not in the size-sense, but in the importance-sense. Let me explain.
You should write the most important things to read, first, and provide more detailed information second.
In other words, this is no good:
There was a problem loading the file, the file might have been deleted, or it might be present on a network share that you don't have access to at your present location.
Do you want to retry opening the file?
Instead, change the order:
Problem loading file, do you want to retry?
There was a problem loading the file, the file might have been deleted, or it might be present on a network share that you don't have access to at your present location.
This way, the user can read just as much as he wants, or bothers, and still have an idea about what's being asked.
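A tiny sketch of that "headline first, details second" structure, so the same message object can feed both a short dialog title and an expandable details pane; the class and field names here are purely illustrative:

from dataclasses import dataclass

@dataclass
class ErrorMessage:
    headline: str    # what the user must know or decide, readable at a glance
    details: str     # background for those who keep reading

    def render(self) -> str:
        return f"{self.headline}\n\n{self.details}"

msg = ErrorMessage(
    headline="Problem loading file. Do you want to retry?",
    details=("There was a problem loading the file. It might have been deleted, "
             "or it might be on a network share you can't reach from your "
             "present location."),
)
print(msg.render())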
To start, write error messages that users can actually understand. "Error: 1023" is not a good example. I think a better way is to log the error rather than show it to the user with some "fancy" code. Or, if logging is not possible, give the users a proper way to send the error details to the support department.
Also, be short and clear enough. Do not include technical details. Do not show them information that they cannot use. If possible, provide a workaround for the error. If not, provide a default route that should be taken.
If your application is a web app, designing custom error pages is a good idea. They stress users less, take SO for example. You can get some ideas how to design a good error page here: http://www.smashingmagazine.com/2007/07/25/wanted-your-404-error-pages/
Make them fun. (It seemed relevant, given the site we're on :) )
One thing I'd like to add.
Use verbs that describe the action for the buttons that close your error messages, rather than exclamations or generic labels; for example, avoid "Ok!", a bare "Close", etc.
Unless you can provide the user some simple work-around, don't bother showing the user an error message at all. There is just no point, since 90% of users won't care what it says.
On the other hand If you CAN actually show the user a useful workaround, then one way to force them to read it is make the OK button become enabled after 10 seconds or so. Sort of how Firefox does it whenever you are trying to install a new plug-in.
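Here is a sketch of that delayed-enable idea in plain tkinter; the 10-second delay and the wording are arbitrary choices, and Firefox's actual implementation is of course its own:

import tkinter as tk

root = tk.Tk()
root.title("Workaround available")

tk.Label(root, text="Restart the service, then reopen the report.",
         wraplength=300, padx=20, pady=20).pack()

ok = tk.Button(root, text="OK (please read first)", state=tk.DISABLED,
               command=root.destroy)
ok.pack(pady=(0, 20))

# Enable the button only after the user has had time to read the message.
root.after(10000, lambda: ok.config(state=tk.NORMAL, text="OK"))

root.mainloop()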
If it is a total crash that you cannot gracefully recover from, then inform the user in very layman terms saying:
"I'm sorry we screwed up, we would like to send some information about this crash, will you allow us to do so? YES / NO"
In addition, try not to make your error messages longer than a sentence. When people (me included) see a whole paragraph talking about the error, my mind just shuts off.
With so much social media and information overload, people's minds freeze when they see a wall of text.
EDIT:
Someone once also recently suggested using comic strips along with whatever message you want to show. Such as something from Dilbert that may be close to the type of error you may have.
From my experience: you don't get users (especially non-technical ones) to read error messages. No matter how clear and understandable, bold, red and flashing the message is, that you display, most users will just click anything away that they're not used to, even if it's "Do you really want to delete everything?". I have seen users click the "window close"-icon instead of "OK" or "cancel" even though they didn't even know which option they chose by doing so ...
If you really need to force users to read what you're displaying, I'd suggest a JavaScript countdown until the button becomes clickable. That way the user will hopefully use the waiting time to really read what he's supposed to. Be careful though: most users will be even more annoyed by that :)
I furthermore like your idea of a "read more" link, although I doubt it will interest users who just want to get rid of the message by any means...
Just for the record: there are users that DO read error messages but are so afraid that they won't do anything with it. I once had a support call where the customer would read an error message to me, asking me, what he should do. "Well, what are your options?", I asked. "The window only has an 'OK'-button.", he replied. ... mmh, hard one :)
I often display the error in red (when the design allows it).
Red stands for "alert", etc. so it's more often read.
Well, to answer your question directly: Don't have your programmers write your error messages. If you follow this one piece of advice, you'd save, cumulatively, thousands of hours of user angst and productivity and millions of dollars in technical support costs.
The real goal, however, should be to design your application so users can't make mistakes. Don't let them take actions that lead to error messages and require them to back up. As a simple example, in a web form that requires all its fields to be filled in, instead of popping up an error message when users click on the Send button, don't enable the Send button until all the fields contain valid content. It means more work on the back side, but it results in a better user experience.
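A minimal sketch of that "don't enable Send until the form is valid" approach, in tkinter and with deliberately simplistic, invented validation rules:

import tkinter as tk

root = tk.Tk()
name_var, email_var = tk.StringVar(), tk.StringVar()

def refresh_send_state(*_):
    # Enable Send only when every required field has minimally valid content.
    valid = bool(name_var.get().strip()) and "@" in email_var.get()
    send_btn.config(state=tk.NORMAL if valid else tk.DISABLED)

tk.Label(root, text="Name").pack()
tk.Entry(root, textvariable=name_var).pack()
tk.Label(root, text="Email").pack()
tk.Entry(root, textvariable=email_var).pack()

send_btn = tk.Button(root, text="Send", state=tk.DISABLED, command=root.quit)
send_btn.pack(pady=10)

# Re-check the form every time either field changes.
name_var.trace_add("write", refresh_send_state)
email_var.trace_add("write", refresh_send_state)
root.mainloop()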
Of course, that's a bit of an ideal world. Sometimes, program errors are unavoidable. When they do occur, you need to provide clear, complete, and useful information, and most importantly, don't expose the system to the user and don't blame users for their actions.
A good error message should contain:
What the problem is and why it happened.
How to resolve the problem.
One of the worst things you can do is simply pass system error messages through to users. For example, when your Java program throws an exception, don't simply pass the programmer-ese up to the UI and expose it to the user. Catch it, and have a clear message created by your user assistance developer that you can present to your user.
I was lucky enough, on my last job, to work with a team of programmer who wouldn't think of writing their own error messages. Any time they found themselves in a situation where one was required and the program couldn't be designed to avoid it (often because of limited resources), they always came to me, explained what they needed, and let me create an error message that was clear and followed company style. If that was the default mindset of every programmer, the computing world would be a far, far better place.
Fewer errors
If an application throws vomit at you on a regular basis, you become immune to it, and errors become irritating background muzak. If an error is a rare event, it will garner more attention.
Quash anything that isn't a major deal, throw out all those warnings, find ways of understanding user intent, take out the decisions wherever possible. I have a few apps which I continue to streamline in this way. Developers see every error as important, but this is not true from a user perspective. Look for the users' common response to a problem and capture that; deploy that as your response.
If you do need to raise an error: short, concise, low terror factor, no exclamation marks. Paragraphs are fail.
There's no silver bullet, but you need to socially engineer to make errors important.
We told users their manager had been contacted (which was a lie). It worked a little too well and had to be removed.
Adding an "Advanced" button that reveals some more technical details will provide an incentive to read it for the part of the target audience that thinks of itself as technical.
I'd suggest that you give feedback (stating that the user made a mistake) immediately after the mistake is made. (For instance, when entering a value of a date field, check the value and, if it is wrong, make the input field visually different).
If there are errors on the page (I'm more into web development, hence I'm referring to it as a "page", but it can also be called a "form"), show an "error summary" explaining that there were errors, with a bulleted list of exactly which errors happened. However, if there are more than 5-6 words per message, those won't be read/understood.
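A small sketch of that "validate immediately, then show a short summary" idea; the field rules are placeholders and the messages are deliberately kept to a few words each:

import datetime

def validate(form):
    """Return a dict of field -> short error message for everything that failed."""
    errors = {}
    if not form.get("name", "").strip():
        errors["name"] = "Name is required"
    if "@" not in form.get("email", ""):
        errors["email"] = "Email looks invalid"
    try:
        datetime.datetime.strptime(form.get("date", ""), "%m/%d/%Y")
    except ValueError:
        errors["date"] = "Use mm/dd/yyyy"
    return errors

form = {"name": "", "email": "bob(at)example", "date": "blahblah"}
errors = validate(form)
if errors:
    print("Please fix the following:")         # the error summary
    for field, message in errors.items():
        print(f" - {field}: {message}")         # and mark each field inline in the UI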
How about making the button state "Click here to speak with a support technician who will assist you with this issue."
There are many websites that provide the option to speak with a real person.
I read a candidate for the most horrific solution on slashdot:
We have found that the only way to make users take responsibility for errors is to give them a penalty for forcing the error to go away. For starters, where possible, the error won't actually close for them unless we enter an admin password to make it go away, and if they reboot to get rid of it (Task Manager is disabled on all client PCs) the machine will not open the application that crashed for 15 minutes. Of course, this all depends on the type of users you are dealing with, as more technically adept users wouldn't accept this kind of system, but after trying for literally YEARS to make users take responsibility for crashes and making sure the IT department is aware of them in order to fix the issue before it gets too hard to manage, these are the only steps that worked. Now, all of our end users are aware that if they ignore errors, they are going to suffer for it themselves.
"ATTENTION! ATTENTION! If you do not read error message you WILL DIE!"
Despite all the recommendations in the accepted answer, my users continued to click the first button they could find. So now I show this:
The user has to make a choice before the OK button appears
If he selects the 3rd option, he can continue, otherwise the application quits.

How to break someone into testing?

OK. Our product works. Beta testers are actually getting their stuff done. Time for the next iteration. But how to ensure quality? We need a tester!
How do I get someone fresh off the street started in testing? I have no clue on how to do it myself (I'm a developer, not a tester)!
We are a tiny team:
2 architects (as in buildings, not software, they are the domain experts here) figuring out what to build
me building it
and a new guy to do some testing before we push releases out
None of us has a clue on how to do this professionally. So far we have:
a bunch of virtual machines spanning the configurations we would like to test
various versions of windows
german and english, the two languages likely to be in use by our customers
the host software we are writing for (Autodesk Revit Architecture 2010, we are building a plugin for energy calculations)
a text document describing some tests I did (installed release xyz, did this, did that, etc.)
a bug tracking system to which the tester can add all the bugs he finds
I expect we will need a test script. But how? Who? What? When?
Why are you looking for "someone off the street"? To me, it sounds kind of like asking "I want to hire a new programmer, how do I get someone off the street and get him up to speed programming my software?". Why would you want to do that, over hiring someone who is a programmer already?
In your situation, which is that you don't know much about testing, I'd definitely think about hiring someone with experience in the field.
Specifically, I'd probably look for:
Someone with some experience performing tests under his belt (since you're going to want him actually doing tests).
Someone with some experience writing test plans/etc.
Someone with some experience running a QA team.
The last point is optional, but hopefully your team will be growing as your software grows, so it might make sense to get someone who can grow in the role as well (not to mention having the experience to help you decide when and how to grow the QA team).
Well, are you looking to expand your team with a tester? Have you considered just hiring a test specialist from a consultancy firm?
Before you get somebody to test, make sure you meet the requirements for testing. At a minimum you need:
A specification: Some authoritative source on what the application is supposed to do. This could be an expert that can answer any and all questions on exactly what the app is supposed to do, but the more that is written down and the more formally defined it is the better.
Time: Testing takes time. You can't hand off an application to the tester 30 minutes before it's supposed to go live and expect any worthwhile results. If you're doing waterfall development, testing will require a lot of time at the end. Lots of other development models let testing run in parallel with development, which saves a lot of time, but regardless of the model you use, testing will require more time than not testing.
If you don't have these two things, quality assurance is just a pipe dream.
Now if you do have those met, and you're trying to train somebody to test, here's my crash course on testing.
Fundamentally, testing an application means that you are attempting to ensure two things:
The program does what it is supposed to do.
The program does not do what it is not supposed to do.
That's the core mindset that I use. Building from that I approach things in terms of actions and attempt to verify:
An expected action with expected preconditions produces an expected effect.
An expected action with unexpected preconditions produces no effect or is handled appropriately.
An unexpected action produces no effect or is handled appropriately.
No unexpected effects occur.
Item 1 comes directly from the spec: You make sure that the program does what it is supposed to do.
Items 2 and 3 are where the art of testing comes in. What unexpected actions and preconditions can I perform? I could try to enter the wrong password. I could try to directly type in the URL of a supposedly secured page. I could try to paste odd unicode characters into a text field. I could try to put SQL or javascript code into a text field.
Item 4 is the infinite no-man's land of testing, the part that makes complete testing impossible. (2 and 3 are also infinite, but not as depressing to think about.) That doesn't mean you ignore it. You always keep an eye out for anything unusual. Also, sometimes inspiration strikes and you think of a possible way to cause an unexpected effect: "What happens if I log in between 11:59:59PM and 12:00:00AM on the third tuesday of the month? Oh look, it made me an administrator." Technical knowledge and a peek inside the black box help with coming up with scenarios like that.
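To make items 1-3 concrete, here is a pytest-style sketch against a made-up withdraw() function; the function and its rules are invented purely to show the three kinds of checks, not taken from any real system:

import pytest

def withdraw(balance, amount):
    """Toy function under test: returns the new balance."""
    if not isinstance(amount, (int, float)):
        raise TypeError("amount must be a number")
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_expected_action_expected_preconditions():     # item 1: does what it should
    assert withdraw(100, 30) == 70

def test_expected_action_unexpected_preconditions():   # item 2: handled appropriately
    with pytest.raises(ValueError):
        withdraw(10, 30)                                # more than the balance

def test_unexpected_action():                           # item 3: nonsense input
    with pytest.raises(TypeError):
        withdraw(100, "lots")

Item 4 has no test file; it is the exploratory, keep-your-eyes-open part that automated checks like these cannot cover.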
There is a whole lot more to say about testing, but that's the bare minimum I can think of: The technical requirements and the approach to the problem.
Ideally, you'll need to give the tester:
training to make sure he knows the product to be tested.
documentation on what the expected results are.
test plans - what needs to be tested and how
a test tracking system to track what is being tested, what passed the tests, what needs to be fixed, etc. That system does not have to be too sophisticated; depending on the size of the project, an Excel spreadsheet may suffice.
In their podcast #64, Jeff and Joel discuss (among other things) what skills a good tester should possess. Transcript also available (about halfway down the page)

Ways to enhance a trial user's first time experience

I am looking for some ideas on enhancing a trial user's experience when he uses a product for the first time. The product is aimed at a particular domain and has various features/workflows. Experienced users of the product naturally find interesting ways to combine features to get the results they want (somewhat like using an IDE from a programmer's perspective). Trial users get to use all features of the product in a limited fashion (for example, if there is a search functionality, the trial user might see only the top 20 results, or he may be allowed to search only 100 times). My question is: what are the best ways to help a trial user explore/understand the possibilities of the product in the trial period, especially in the first 20 - 60 minutes before the user gives up on the product?
Edit 1: The product is a desktop app (served via JNLP, so no install required) and as pointed out in the comments, the expectations can be different in this case. That said, many webapps do take a virtual desktop form and so, all suggestions are welcome.
Check out how blinksale.com handles this. It's an invoicing app, but to prevent it from looking too empty for a new account, they show static images in places where you'd actually have content if you used the app. Makes it look less barren at first until you get your own data in.
If you can, avoid feature-limiting a trial. It stops the user from experiencing what the product is ACTUALLY like. It also prevents a user from finding out if a feature actually works like they want/expect/need it to.
If you have a trial version, and you can, optimise it for first-time use. Focus on and highlight the features that allow the user to quickly and easily get useful output from the system.
Allow users to export any data they enter into a trial system - and indicate that this is possible/easy. You don't want them to be put off from trying something because of a potential for wasted effort.
Avoid requiring users to do lots of configuration before using a trial. Prepopulate settings based on typical/common/popular values. You may also want to consider having default settings for different types of usage, e.g. "If you want to see what the system is like for scenario X, use configuration J. If you want to see what the system is like for use case Y, use configuration K.", where J & K are collections of settings best suited to a particular type of usage.
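A trivial sketch of that last suggestion, preset configurations for different usage types; the profile names and settings below are invented placeholders:

# Hypothetical preset profiles so a trial user can start without configuring anything.
PROFILES = {
    "small_office": {"currency": "EUR", "backup_interval_min": 60, "sample_data": True},
    "field_survey": {"currency": "EUR", "backup_interval_min": 15, "sample_data": True},
}

DEFAULTS = {"language": "en", "currency": "USD", "backup_interval_min": 30,
            "sample_data": False}

def settings_for(profile_name):
    """Start from sensible defaults and overlay the chosen usage profile."""
    settings = dict(DEFAULTS)
    settings.update(PROFILES.get(profile_name, {}))
    return settings

print(settings_for("small_office"))   # ready-to-use settings, no setup wizard needed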
I'll speak from personal experience while evaluating trial applications.
The most annoying trial applications are those which keep popping up nag screens or constantly reminding me that I'm using a trial. Trials which act exactly like the real product from the beginning till the end of the trial period are just awesome. Limited features are annoying, the only exception I can think of when you could use it is where you have rarely used feature which would allow people to exploit the trial (by using this "once-in-lifetime" needed feature and uninstalling). If you have for example video editing software trial which puts "trial" watermark on output, I'd uninstall it as soon as I'd notice it. In my opinion trial should seamlessly integrate into user work-flow so that once the trial ends they would think "Hey, I have been using this awesome program almost each day since I got the trial, I absolutely have to buy it." Sure some people will exploit it, but at the end you should target the group which will use your product in daily work-flow instead of one time users. Even if user "trials" it 2 times per year, he will keep coming back to your product and might even buy it after 2nd or 3rd "one-time use".
(Sorry for the wall of the text and rant)
As for how to improve the first session: I usually find my way around programs easily, but a one-time-only pop-up/screen (or one with a check-box to never show it again) with videos showing off the best features and intended work-flow is quite helpful. Links to sample documents might also be helpful. If your application can present itself (for example, a slide-show about your slide-show program) you could include such a document. People don't like to read long and boring help files, but if you have a designer on your team, you could ask him to make a short, colourful intro PDF. Also, don't throw all the features at the user at the same time. Split information into simple categories and, if the user is interested in one specific category, keep feeding him more specific information. That's why videos are so good: with 3-6 videos of ~3-5 minutes each you can tell a lot. Also, depending on how complex your program is, you could include a picture showing where specific things are located on the screen.
Just my personal opinion, I have never made a trial myself. Hope it helps.
An interactive walk through/lab exercise that really highlights the major and exciting offerings of your application.
Example: Yahoo Mail does the same when users opt to use the new mail interface.
There are so many ways you can go with this. I still can't claim to have found the best approach.
However, my plan from the beginning with my online (Silverlight) software was to give away something thousands of people will find useful and can use for free. The free version is pretty well representative of the professional product, with only a few features missing that enhance productivity (I'm working on those professional features now). And then I do have a nag popup that comes up every 5 minutes suggesting that you should buy it. That popup can be dismissed as many times as you want. I know that popup will annoy some people but I suppose that's the trade off. There is no perfect plan. But I don't think the occasional nag popup scares that many people away, especially when it can be dismissed with a single click.
I was inspired by Balsamiq Mockups, which has been hugely successful over the past couple years. My trial/nag popup way of doing things was copied almost exactly from Balsamiq. I honestly don't know if this is the ideal plan, but it has obviously worked for them. By the way, I think another reason for Balsamiq's success is that the demo doesn't have to be downloaded & installed. Since the demo is in Flash, there's a very high conversion rate of users actually trying it and becoming addicted to it.

What kind of specs, documents, analysis do you get from superiors when starting a project? [closed]

I currently work in a small business (15-20 employees, 5 programmers) where most projects are custom built CMS and a few web applications products.
Since I started working there, I have worked on many projects, but specifications for each project vary a lot. Sometimes we get a little detail, a Word document telling what the client wants, and what we are suggesting (suggested form fields, a short description of display, etc.). Sometimes almost nothing except "do what you think is the best approach for this project/module/request".
My question to you guys, who might work in different kinds of businesses, is: how (huge pile of paper? Word docs? Visios?) and what kind of information do you get from your superiors, managers, teammates when starting a project (plenty of analysis, drawings, etc.)? How much detail do you get on this?
Hope my question is clear enough, thank you.
Specs..that's kind of funny...how about never :(.
Seriously, a lot of companies assume specs aren't needed. It's absolutely unacceptable, but this is how it is in a LOT of companies. They assume a one-liner is enough and that the programmer knows what the program should do, the inputs/outputs and so on.
Unfortunately in my case I have to actually help write the specs... and I'm the programmer :(.
I mostly get a lot of verbal direction and I use a voice recorder to record the conversation and transcribe it when I am done. I write my own specs from my customers' words.
Then, as a good consultant should, I take the write-up back to the customer and verify it, get a signature and build it, and they live happily ever after! (No they don't, they change their mind 100 times.)
It can vary depending on what group the work falls under:
Support request - If the change will take a short period of time and is fixing something broken, there is this group. This could be as simple as, "Add Bob to the list of authorized users for that ancient form" where the form is something written years ago and aside from adding and removing users, it isn't touched for fear of breaking things.
Service Advisory Committee request - Items that take up to a few days fall into this group, as these are kind of like mini-projects; the request may be to create a new form or portal for a group. This could be upgrading some 3rd-party software where we have some customizations that make the upgrade not necessarily a simple thing for Operations to do.
Project - In this case there are usually a few Word documents and/or e-mail threads that help nail down requirements in terms of scope, budget, and time. These can take months, though there is something to be said for having a prototype to change rather than creating the initial prototype to tell whether requirements are really met or not. Of course, my current project is over a year old, still has a few more months to its timeline, and already has a successor coming after it is done, i.e. there is a Phase II to follow Phase I.
Uber project - These merit their own group of documentation and are the million-dollar, multi-company projects; trying to document everything up front rarely works out well here. Thus, there is some adoption of agile for these, but there are still some growing pains to go through as how we use agile matures. Think installing a dozen modules of some off-the-shelf software that requires both internal and external developers to customize the suite for our specific needs, where the software is supposed to be very robust and flexible and help save lots of time and money on how people otherwise do their jobs. Think ERP or CRM for a couple of examples here.
We are a 16-person company that creates and supports customized software for small retail shop owners.
The projects we get fall into three general categories (as related to specs):
"Here, automate this form." A sales person explains that our customer only wants this form to appear where they can fill it out and print it to make it look professional to their customer. Our specs is a single piece of paper that looks something like an order form or report. This is always false; they want pop-up lookups, automatic updating from other sources, and "while you're at it" add-ons that more than double the time. These, we've learned to just live in the moment and let the project take its course. By the time we're done, the program doesn't look anything like their original form.
Small changes. Like a simple e-mail explaining that the background color is stale, or a request to sort a report by a different column. These, we just do as time allows.
Big company integrations, where we're tasked with making our software work with some big outfit like Intuit (QuickBooks) or FedEx (shipping rates). These often have well-thought-out documentation and sample code. We get hundreds of pages in Word documents or PDFs. The problem with these is when their specs are wrong. We find out about inaccuracies when we try to test or certify our integration. In these instances, we usually take longer in certification than we did to originally develop the processes.
In all cases, the real trouble is when a sales person promises a solution to the customer before even asking a programmer what it would take. As recently as 2 weeks ago, a sales person got into real trouble and had to issue a refund (that person is no longer with the company).
None - at least not from management.
Instead, as a developer (and particularly one leading a software project right now), I'm expected to contact my users/customers/etc and work directly with them to come up with our specifications and requirements. The documentation I do request from my team is only what will be useful to the team. I am lucky in that management rarely requests a document that doesn't make sense or won't provide some use to our project.
I currently have a half-dozen or so specs each 60-80 pages. One of them is 80 pages with no table of contents. Good times.
Our Product Managers and senior engineers prepare three planning docs for our data management software projects.
High-level requirements: 1-to-3 sentence descriptions of hardware/software supported or specific feature for this project. (10-15 pages of Excel-like grids)
Technical details: Engineering implementation of each high-level requirements. Up to a page for each, depending on amount of detail. (30-40 pages of filled-in feature details)
Business agreement: Summary of 1 & 2 with engineering schedule and Product Mgmt's market analysis. Everyone signs off on this. (5 pages analysis, 20 technical)
I haven't seen work flows or other Visio-like details in our specs. The prioritized requirements and schedule prove critical, so we understand when to lop things off to save development and testing time.
