I am creating a Ruby on Rails website, and for one part it needs to be dynamic so that (sorta) trusted users can make parts of the website work differently. For this, I need a scripting language. In a somewhat similar project in ASP.NET, I wrote my own scripting language/DSL. I cannot use that source code (written at work), though, and I don't want to write another scripting language if I don't have to.
So, what choices do I have? The scripting must be locked down and not be able to crash my server or anything. I'd really like if I could use Ruby as the scripting language, but it's not strictly necessary. Also, this scripting part will be called on almost every request for the website, sometimes more than once. So, speed is a factor.
I looked at the RubyLuaBridge but it is Alpha status and seems dead.
What choices for a scripting language do I have in a Ruby project?
Also, I will have full control over where this project is deployed (root access), so there are no real limits.
There's also Rufus-lua, though it's only at version 0.1.0...
What about JRuby? You can use the Java implementations of many scripting languages, such as JavaScript, Scheme, etc.
Well, since it hasn't been suggested yet, there's Locking Ruby In The Safe as described by the Pickaxe book. This allows you to use Ruby as the language without significant slowdown AFAIK.
This technique is intended to allow safe sandboxing of untrusted Ruby code and bug fixes and discussions are directed toward keeping it that way, but infinite loops and some other things still allow malicious users to peg the CPU. (e.g. this discussion maybe.)
What I don't know is how you return data that is inherently safe to use from outside the safe thread. A singleton object (for instance) can mimic whatever class and then do something dangerous when any method is called in the returning thread. I'm still googling around about it. (The Ruby Programming Language says that level 4 "Prevents metaprogramming methods" which would allow you to safely verify the class of a returned object, which I suppose would make results safe to use.)
Barring that, it might not be hard (*snrk*) to implement a Lisp-1 with dynamic scope since you already have a garbage collector.
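To make the safe-level idea concrete, here is a rough sketch of the idiom, assuming 1.8-era MRI ($SAFE level 4 was dropped in Ruby 1.9 and $SAFE itself is gone from modern Ruby, so treat this as historical; run_sandboxed and its timeout are my own naming):

    def run_sandboxed(code, timeout = 1)
      code = code.dup.untaint      # explicit opt-in: eval refuses tainted strings
      result = nil
      thread = Thread.new do
        $SAFE = 4                  # thread-local: no I/O, no metaprogramming, etc.
        result = eval(code)
      end
      unless thread.join(timeout)  # crude guard against the infinite-loop problem
        thread.kill
        raise "untrusted script timed out"
      end
      result
    end

    run_sandboxed("1 + 2")                    # => 3
    run_sandboxed("File.read('/etc/passwd')") # raises SecurityError

The join timeout only mitigates the CPU-pegging issue mentioned above; it does nothing about the harder question of whether the returned object is safe to touch.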
My boss believes that wizards make things simple for the user.
I think they have their place but I can't really define what that place is.
I feel there is a danger in turning something into steps that doesn't need them.
Does anyone know where I could find rules for such things, or even a guideline that describes when and when not to use wizards, and possibly other UI elements as well?
Here is what some common Human Interface Guidelines have to say about when to use them. Most are quite restrictive:
Gnome HIG
An assistant is a secondary window that guides the user through an operation by breaking it into sequential steps. Assistants are useful for making complex operations less intimidating, as they restrict the information visible to the user at any given moment.
[...]
Assistants do have major downsides. After using an assistant it is often hard to figure out where the individual settings aggregated into the assistant are stored. Often people will resort to re-running the assistant, re-entering many settings that they don't want to change.
Assistants are often used in situations where a better solution would be to simplify, or even better automate, the process. Before using an assistant to step people through a complex operation, consider if the operation can be fundamentally simplified so an assistant is unnecessary.
Microsoft Windows Experience Interaction Guidelines:
Consider lightweight alternatives first, such as dialog boxes, task panes, or single pages. Wizards are a heavy UI, best used for multi-step, infrequently performed tasks. You don't have to use wizards—you can provide helpful information and assistance in any UI.
Apple Human Interface Guidelines
For products with complex setup procedures, a setup assistant can be helpful.
(Assistants are not mentioned in any other context, unlike in the other HIGs, so I assume that means Apple thinks they have no place except in setup.)
I'd agree with you that Wizards have their place. And that place is back in Azeroth.
No, but seriously, if the user has to input a lot of different data fields, using a Wizard to split up the data entry into several related groups might help to make things less confusing.
If the Wizard covers a process that consists of steps A, B, and C, and the input at B or C depends on the input at the previous step(s), a Wizard would probably be a good way to structure your application.
There are probably a lot of other situations in which using a Wizard would be warranted (those are just two off the top of my head), but in each case, you'd want to evaluate it and make sure that a Wizard is the absolute best option. To borrow an old saying, everything doesn't become a nail just because your boss wants you to use Wizards as a hammer. If that makes sense.
As far as best practices guidelines goes -- the use of Wizards seems to fall under UX rather than UI, but here's a few items that I came across:
Wizard-style forms best practices
Designing Effective Wizards: A Multidisciplinary Approach (Book)
Best Practice: Designing Wizards
Try reading this.
I would suggest avoiding wizards as much as possible. People have a short attention span, and you risk that, in the middle of the wizard, they start forgetting what they said, what they are doing there, etc.
That being said, I think wizards may be viable when performing some kinds of shopping (e.g., checkout), first-time configurations, and perhaps a few other tasks.
When to Develop a Wizard
Always try to:
Only ask the information really needed
Simplify as much as you can, thus avoiding the need for additional explanation
When creating a wizard:
Clearly show how many steps are needed and how many are completed
Allow the user to revert or cancel it
I want to know if there are methods to quickly find bugs in a program.
It seems that the more you master the architecture of your software, the more quickly you can locate the bugs.
How do programmers improve their ability to find bugs?
Logging, and unit tests. The more information you have about what happened, the easier it is to reproduce it. The more modular you can make your code, the easier it is to check that it really is misbehaving where you think it is, and then check that your fix solves the problem.
Divide and conquer. Whenever you are debugging, you should be thinking about cutting down the possible locations of the problem. Every time you run the app, you should be trying to eliminate a possible source and zero in on the actual location. This can be done with logging, with a debugger, assertions, etc.
Here's a prophylactic method after you have found a bug: I find it really helpful to take a minute and think about the bug.
What was the bug, in essence?
Why did it occur?
Could you have found it earlier, and more easily?
What else did you learn from the bug?
I find taking a minute to think about these things will make it far less likely that you will produce the same bug in the future.
I will assume you mean logic bugs. The best way I have found to catch logic bugs is to implement some sort of testing scheme. Check out JUnit as the standard. Essentially, you define a set of accepted outputs for your methods; every time you build your system, it checks all of your test cases. If you have introduced new logic that breaks your tests, you will know about it instantly and know exactly what you have to fix.
Test driven design is a pretty big movement in programming right now. You will be hard pressed to find a language that doesn't support some kind of testing. Even JavaScript has a multitude of test suites.
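For instance, here is a minimal sketch of such a testing scheme using Ruby's Minitest (analogous to the JUnit workflow described above; the Calculator class is just a stand-in):

    require "minitest/autorun"

    # Stand-in for the logic we want to protect against regressions.
    class Calculator
      def add(a, b)
        a + b
      end
    end

    class CalculatorTest < Minitest::Test
      def test_add_returns_the_sum
        assert_equal 5, Calculator.new.add(2, 3)
      end

      def test_add_handles_negative_numbers
        assert_equal(-1, Calculator.new.add(2, -3))
      end
    end

Run the file with ruby calculator_test.rb; any change that breaks an expected output fails the build immediately.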
Experience makes you a better debugger. Pay close attention to the bugs that you AND others commonly make. Try to figure out if/how these bugs apply to ALL code that affects you, not the single instance of where the bug was seen.
Raymond Chen is famous for his powers of psychic debugging.
Most of what looks like psychic debugging is really just knowing what people tend to get wrong.
That means that you don't necessarily have to be intimately familiar with the architecture / system. You just need enough knowledge to understand the types of bugs that apply and are easy to make.
I personally take the approach of thinking about where the bug may be in the code before actually opening up the code and taking a look. When you first start with this approach, it may not work very well, especially if you are unfamiliar with the code base. However, over time, someone will be able to tell you the behavior they are experiencing and you'll have a good idea where the problem is located, or you may even know what to fix before looking at the code at all.
I was on a project for several years that was maintained by a vendor. They were not very good debuggers, and most of the time it was up to us to point them to the area of the code that had the problem. What made our problem worse was that we didn't have a nice way to view the source code, so a lot of our "debugging" was done by feel.
Error checking and reporting. The #1 newbie coder debugging mistake is to turn off error reporting, to avoid checking whether what's going on makes sense, and so on. In general, people feel that if they can't see anything going wrong, then nothing is going wrong. Which, of course, could not be further from the truth.
Instead, your code should be chock full of error conditions that will make lots of noise, with detailed reporting, someplace you will see it. (This doesn't mean inside a production web page.) Then, instead of having to trace an error all over the place because it got passed through sixteen layers of execution before it finally got someplace that broke, your errors start happening proximately to the actual issue.
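As a sketch of what that noisy, detailed checking can look like (in Ruby; load_config and the message wording are made up for illustration):

    require "yaml"

    # Check every assumption at the point where it can first fail, and
    # report enough detail to locate the problem without tracing layers.
    def load_config(path)
      raise ArgumentError, "config path is nil" if path.nil?
      unless File.exist?(path)
        raise "Config file #{path.inspect} does not exist (cwd: #{Dir.pwd})"
      end
      config = YAML.safe_load(File.read(path))
      unless config.is_a?(Hash)
        raise "Config #{path} parsed to #{config.class}, expected a Hash"
      end
      config
    end

Each raise fires next to the assumption it protects, so a bad value is reported where it first appears instead of sixteen layers later.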
It seems that the more you master the architecture of your software, the more quickly you can locate the bugs.
After understanding the architecture, one's ability to find bugs in the application increases with their ability to identify and write extensive tests.
Know your tools.
Make sure that you know how to use conditional breakpoints and watches in your debugger.
Use static analysis tools as well - they can point out the more obvious issues.
Sleep and rest.
Use programming methods that produce fewer bugs in the first place.
If to implement a single stand-alone functional requirement it takes N separate point-edits to source code, the number of bugs put into the code is roughly proportional to N, so find programming methods that minimize N. Ways to do this: DRY (don't repeat yourself), code generation, and DSL (domain-specific-language).
Where bugs are likely, have unit tests.
Obviously. IMHO, the best unit tests are Monte Carlo.
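A minimal sketch of a Monte Carlo style unit test in Ruby's Minitest, using a sortedness invariant as the example property (the property and counts are arbitrary):

    require "minitest/autorun"

    class SortInvariantTest < Minitest::Test
      # Rather than a few hand-picked cases, hammer the routine with
      # random inputs and assert properties that must always hold.
      def test_sort_output_is_ordered_and_same_size
        1000.times do
          input  = Array.new(rand(0..50)) { rand(-1_000..1_000) }
          output = input.sort
          assert_equal input.size, output.size
          assert output.each_cons(2).all? { |a, b| a <= b },
                 "unsorted output for input #{input.inspect}"
        end
      end
    end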
Make intermediate results visible.
For example, compilers have intermediate representations, in the form of 4-tuples. If there is a bug, the intermediate code can be examined. That tells if the bug is in the first or second half of the compiler.
P.S. Most programmers are not aware that they have a choice of how much data structure to use. The less data structure you use, the less are the chances for bugs (and performance issues) caused by it.
I find tracepoints to be an invaluable debugging tool. They are a bit like logging, except you create them during a debugging session to solve a particular issue, like breakpoints.
Printing the stack trace in a tracepoint can be especially useful. For example, you can print the hash code and stack trace in the constructor of an object, and later, when the object is used again, search for its hash code to see which client code created it. The same goes for seeing who disposed of it, called a certain method, etc.
They are also great for debugging issues related to window focus changes and the like, where a debugger would interfere if you dropped into break mode.
Static code tools like FindBugs
Assertions, assertions, and assertions.
Some areas of our code have 4 or 5 assertions for each line of real code. When we get a bug report, the first thing that happens is that the customer data is processed in our debug build; 99 times out of a hundred, an assert will fire near the cause of the bug.
Additionally, our debug build performs redundant calculations to ensure that an optimized algorithm is returning the correct result, and debug functions are used to examine the sanity of data structures.
The hardest thing new developers have to contend with is getting their code to survive the assertions of the code they are calling.
Additionally, we do not allow any code to be put back to the top level that causes any integration or unit test to fail.
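Ruby has no built-in assert outside its test frameworks, but a tiny helper reproduces the debug-build behavior described above (a sketch; the DEBUG switch and binary_search are hypothetical):

    # Assertions are free in production runs and fire loudly in debug runs.
    DEBUG = ENV["DEBUG"] == "1"

    def assert(condition, message = "assertion failed")
      raise message if DEBUG && !condition
    end

    def binary_search(sorted, target)
      assert sorted.each_cons(2).all? { |a, b| a <= b }, "input must be sorted"
      # ... the actual search ...
    end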
Stepping through the code, examining flow/state where unexpected behavior is occurring. (Then develop a test for it, of course).
Writing Debug.Write(message) in your code and using DebugView is another option: run your application and watch what is going on.
"Architecture" in software means something like:
Several components
The components interact across clearly-defined interfaces
Each component has a well-defined responsibility
The responsibility of one component is unlike the responsibilities of other components
So, as you said, the better the architecture the easier it is to find bugs.
First: knowing the bug, you can decide which functionality is broken, and therefore know which component implements that functionality. For example, if the bug is that something isn't being logged properly, then the bug should be in one of 3 places:
In the component that's responsible for logging (your logging library)
Or, above that in the application code which is using this library
Or, below that in the system code which this library is using
Second: examine the data transferred across the interfaces between components. To continue the previous example:
Set a debugger breakpoint on the application code which invokes the logger API, to verify whether the logger API is being used correctly (e.g. whether it's being invoked at all, whether parameters are as-expected, etc.).
Doing this tells you whether the bug is in the component above this interface, or in the component that's below this interface.
Repeat (perhaps using binary search if the call stack is very deep) until you've found which component is at fault.
When you come to the point that you think there must be a bug in the OS, check your assertions -- and put them into the code with "assert" statements.
Conversely, as you are writing the code, think of the range of valid inputs for your algorithms and put in assertions to make sure you have what you think you have. Same goes for output: Check that you produced what you think you produced.
E.g. if you expect a non-empty list:
# getList is assumed to return a list; fail fast and loudly if it's empty
l = getList(input)
assert l, "List was empty for input: %s" % str(input)
I'm part of the QA team at work, and knowing the product and how it is developed helps a lot in finding bugs; also, when I make new QA tools I pass them to our dev team to test, because finding bugs in your own code is just plain hard!
Some people say programmers are tainted, so they cannot see the bugs in their own product; and we are not talking about code here, we are beyond that: usability and the functionality itself.
Meanwhile, unit testing seems to be a nice solution for finding bugs in your own code, but it's totally pointless if you're wrong even before writing the unit test. How are you going to find the bugs then? You don't! Let your co-workers find them, or hire a QA guy.
Scientific debugging is what I always used, and it greatly helps.
Basically, if you can replicate a bug, you can track down its origin. You should then run some experiments, observe the results, and form hypotheses about why the bug happens.
Writing about all your hypotheses, attempts, expected results and observed results can help you track down the bugs, particularly if they're nasty.
There are automated tools that can help you with that process, particularly git-bisect (and similar bisection tools on other revision systems) to quickly find which change introduced the bug, unit testing to reproduce a bug and prevent regressions in your code (can be used in combination with bisect), and delta debugging to find the culprit in your code (similar to git-bisect but whereas git-bisect works on the code history, delta debugging works on the code directly).
But whatever tools you are using, the most important benefit is the scientific methodology itself, as it is the formalization of what most experienced debuggers do.
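For example, a typical git-bisect session looks something like this (the tag and test-script names are placeholders):

    git bisect start
    git bisect bad HEAD            # the current commit exhibits the bug
    git bisect good v1.2.0         # a revision known to work
    git bisect run ruby test/reproduce_bug_test.rb
    git bisect reset               # return to where you started

git bisect run checks out revisions between the good and bad endpoints, runs the test at each step (exit code 0 = good, non-zero = bad), and reports the first bad commit.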
I definitely appreciate a good interface, and as a developer I try to create them for my users. But appreciating a good interface and designing one are two different things. I'm looking for good interfaces (such as, IMHO, Stack Overflow and Gmail) as examples of good UI on which I can model my own UIs.
I personally think that Netflix has an excellent web UI: responsive and easy to navigate. Not much CRUD going on, but I find it very comfortable.
Pretty much anything by Google, really. They're all very simple and to the point, focusing on usability.
You should get yourself a copy of both Don't Make Me Think and The Non-Designer's Design Book for your base knowledge/insight.
From there, it's much easier for you to dissect and analyze the layouts you already know and like, and recreate them for your own amusement.
edit: To avoid misunderstanding, the point I'm trying to make is that you probably don't need many examples of nice layouts if you know what to look for. For example, I could be shown a thousand haute couture dresses and I still couldn't make one myself, because I don't know what to look for.
My favorites
Stack Overflow: This is community wiki, so it's not a rep-point grab. I just really love the interface on this site; I've been to too many crappy Q&A sites.
Google Reader
MSDN: It's gotten a ton better in recent years and is a great way to grab little esoteric details about various APIs
iStockPhoto.com: it's simple and effective, and handles a large amount of information and data without getting bogged down. It also doesn't get in the way of the info you are looking for.
A good user interface fulfills a specific need of its users effectively.
As an example, here is a site (translation) that I have created for finding out what food is available in the cafeterias of the University of Helsinki. The typical use case: when a student is hungry, he needs to know what food is available in the neighborhood student cafeterias (which are cheap for students), so that he can choose where and what to eat. He knows where each of those cafeterias is, but does not know what food they have today.
That site shows all the needed information at once. Because the students typically have a couple of cafeterias where they go, they can either bookmark the page with those cafeterias selected, or save the selection as a cookie. After that they can reach their goal without any navigation on the web site.
I don't use it on a day-to-day basis, but I'm very impressed with the Perseus Project digital library.
Here's a link to a poem from Catullus' Carmina in Latin as an example of the interface. Some features that I really like:
Click on the bar near the top to jump to any poem in the work. Larger chunks of the bar represent larger sections of the work (poems, chapters, however that particular work is logically broken up by the author).
Click on a Latin word in the poem to bring up a window (be patient; it seems to take a while) with lexicon entries, user voting and statistics on the word form (i.e. what the inflection means in the context of the sentence; it can be ambiguous in Latin) and so forth.
There are a number of resources down the right column, including various English translations, notes, references, etc. Any of them can be either shown in the right column, or swapped out with whatever is in the main content area in the center.
One of my personal favs: newspond.com
What is your best practical user-friendly user-interface design or principle?
Please submit those practices that you find actually make things really useful - no matter what - if it works for your users, share it!
Summary/Collation
Principles
KISS.
Be clear and specific in what an option will achieve: for example, use verbs that indicate the action that will follow on a choice (see: Impl. 1).
Use obvious default actions appropriate to what the user needs/wants to achieve.
Fit the appearance and behavior of the UI to the environment/process/audience: stand-alone application, web-page, portable, scientific analysis, flash-game, professionals/children, ...
Reduce the learning curve of a new user.
Rather than disabling or hiding options, consider giving a helpful message that points the user to alternatives, but only where those alternatives exist. If no alternatives are available, it's better to disable the option, which visually states that the option is not available; do not hide unavailable options, but rather explain in a mouse-over popup why an option is disabled.
Stay consistent and conform to practices, and placement of controls, as is implemented in widely-used successful applications.
Lead the expectations of the user and let your program behave according to those expectations.
Stick to the vocabulary and knowledge of the user and do not use programmer/implementation terminology.
Follow basic design principles: contrast (obviousness), repetition (consistency), alignment (appearance), and proximity (grouping).
Implementation
(See answer by paiNie) "Try to use verbs in your dialog boxes."
Allow/implement undo and redo.
References
Windows Vista User Experience Guidelines [http://msdn.microsoft.com/en-us/library/aa511258.aspx]
Dutch websites - "Drempelvrij" guidelines [http://www.drempelvrij.nl/richtlijnen]
Web Content Accessibility Guidelines (WCAG 1.0) [http://www.w3.org/TR/WCAG10/]
Consistency [http://www.amazon.com/Design-Everyday-Things-Donald-Norman/dp/0385267746]
Don't Make Me Think [http://www.amazon.com/Dont-Make-Me-Think-Usability/dp/0321344758]
Be powerful and simple [http://msdn.microsoft.com/en-us/library/aa511332.aspx]
Gestalt design laws [http://www.squidoo.com/gestaltlaws]
I test my GUI against my grandma.
Try to use verbs in your dialog boxes.
That is, label the buttons with the action they perform (for example "Save" / "Don't Save") instead of a generic "Yes" / "No". (The original answer illustrated this with two dialog screenshots.)
Follow basic design principles
Contrast - Make things that are different look different
Repetition - Repeat the same style in a screen and for other screens
Alignment - Line screen elements up! Yes, that includes text, images, controls and labels.
Proximity - Group related elements together. A set of input fields to enter an address should be grouped together and be distinct from the group of input fields to enter credit card info. This is basic Gestalt Design Laws.
Never ask "Are you sure?". Just allow unlimited, reliable undo/redo.
Try to think about what your user wants to achieve instead of what the requirements are.
The user will enter your system and use it to achieve a goal. When you open up Calc, 90% of the time you need to make a simple, fast calculation; that's why it is set to simple mode by default.
So don't think about what the application must do; think about the user who will be doing it, probably bored, and try to design based on what his intentions are. Try to make his life easier.
If you're doing anything for the web, or any front-facing software application for that matter, you really owe it to yourself to read...
Don't make me think - Steve Krug
Breadcrumbs in webapps:
Tell -> The -> User -> Where -> She -> Is in the system
This is pretty hard to do in "dynamic" systems with multiple paths to the same data, but it often helps navigate the system.
I try to adapt to the environment.
When developing a Windows application, I use the Windows Vista User Experience Guidelines, but when I'm developing a web application I use the appropriate web guidelines; because I develop Dutch websites, I use the "Drempelvrij" guidelines, which are based on the Web Content Accessibility Guidelines (WCAG 1.0) by the World Wide Web Consortium (W3C).
The reason I do this is to reduce the learning curve of a new user.
I would recommend getting a good solid understanding of GUI design by reading the book The Design of Everyday Things. The main principle, though, comes from a comment by Joel Spolsky: when the behavior of the application differs from what the user expects to happen, you have a problem with your graphical user interface.
The best example is when somebody swaps the OK and Cancel buttons on a web site. The user expects the OK button to be on the left and the Cancel button to be on the right. In short, when the application's behavior differs from what the user expects, you have a user interface design problem.
Although, the best advice, in no matter what design or design pattern you follow, is to keep the design and conventions consistent throughout the application.
Avoid asking the user to make choices whenever you can (i.e. don't create a fork with a configuration dialog!)
For every option and every message box, ask yourself: can I instead come up with some reasonable default behavior that
makes sense?
does not get in the user's way?
is easy enough to learn that it costs little to the user that I impose this on him?
I can use my Palm handheld as an example: the settings are really minimalistic, and I'm quite happy with that. The basic applications are well designed enough that I can simply use them without feeling the need for tweaking. Ok, there are some things I can't do, and in fact I sort of had to adapt myself to the tool (instead of the opposite), but in the end this really makes my life easier.
This website is another example: you can't configure anything, and yet I find it really nice to use.
Reasonable defaults can be hard to figure out, and simple usability tests can provide a lot of clues to help you with that.
Show the interface to a sample of users. Ask them to perform a typical task. Watch for their mistakes. Listen to their comments. Make changes and repeat.
The Design of Everyday Things - Donald Norman
A canon of design lore and the basis of many HCI courses at universities around the world. You won't design a great GUI in five minutes with a few comments from a web forum, but some principles will get your thinking pointed the right way.
When constructing error messages, make the error message be the answers to these 3 questions (in that order):
What happened?
Why did it happen?
What can be done about it?
This is from "Human Interface Guidelines: The Apple Desktop Interface" (1987, ISBN 0-201-17753-6), but it can be used for any error message anywhere.
There is an updated version for Mac OS X.
The Microsoft page User Interface Messages says the same thing: "... in the case of an error message, you should include the issue, the cause, and the user action to correct the problem."
Also include any information that is known by the program, not just some fixed string. E.g., for the "Why did it happen" part of the error message, use "Raw spectrum file L:\refDataForMascotParser\TripleEncoding\Q1LCMS190203_01DoubleArg.wiff does not exist" instead of just "File does not exist".
Contrast this with the infamous error message: "An error happened."
(Stolen from Joel :o) )
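A small Ruby sketch of an error message built along those lines (the helper name, path, and message wording are invented):

    # Answer, in order: what happened, why it happened, what can be done.
    def missing_spectrum_file_message(path)
      "Could not load the spectrum data. " +                              # what happened?
      "The raw spectrum file #{path} does not exist. " +                  # why did it happen?
      "Check that the drive is mounted and the path is spelled correctly."  # what can be done?
    end

    path = "L:/data/Q1.wiff"  # hypothetical
    raise missing_spectrum_file_message(path) unless File.exist?(path)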
Don't disable/remove options; rather, give a helpful message when the user clicks/selects one.
As my data structures professor pointed out today: give instructions on how your program works to the average user. We programmers often think our programs are quite logical, but the average user probably won't know what to do.
Use discreet/simple animated features to create seamless transitions from one section to the other. This helps the user build a mental map of the navigation/structure.
Use short (one word if possible) titles on the buttons that describe clearly the essence of the action.
Use semantic zooming where possible (a good example is how zooming works on Google/Bing maps, where more information is visible when you focus on an area).
Create at least two ways to navigate: Vertical and horizontal. Vertical when you navigate between different sections and horizontal when you navigate within the contents of the section or subsection.
Always keep the main options nodes of your structure visible (where the size of the screen and the type of device allows it).
When you go deep into the structure always keep a visible hint (i.e. such as in the form of a path) indicating where you are.
Hide elements when you want the user to focus on data (such as reading an article or viewing a project); however, beware of points #4 and #5.
Be Powerful and Simple
Oh, and hire a designer / learn design skills. :)
With GUIs, standards are somewhat platform-specific. E.g., while developing a GUI in Eclipse, this link provides a decent guideline.
I've read most of the above and one thing that I'm not seeing mentioned:
If users are meant to use the interface ONCE, showing only what they need to use, where possible, is great.
If the user interface is going to be used repeatedly by the same user, but maybe not very often, disabling controls is better than hiding them: a changing user interface, with hidden features that are not obvious to (or remembered by) an occasional user, is frustrating.
If the user interface is going to be used VERY REGULARLY by the same user (and there is not a lot of turnover in the job, i.e. not a lot of new users coming online all the time), disabling controls is absolutely helpful; the user will become accustomed to the reasons why things happen, and preventing them from accidentally using controls in improper contexts is appreciated and prevents errors.
Just my opinion, but it all goes back to understanding your user profile, not just what a single user session might entail.