I've been reading a lot about UI design lately, and Fitts' Law keeps popping up.
From what I gather, it's basically: the larger an item is, and the closer it is to your cursor, the easier it is to click on.
So what about touchscreen devices, where the input comes from one or more touches?
What are the fundamentals to take into account here?
Should it be something like: the user's hands are at the sides of the device, so buttons should be close to the left- and right-hand sides of the screen?
Thanks
I started thinking about this recently too, and here are some considerations:
Fitts' law was developed in the 1950s as a human factors model (read: controls for fighter-plane cockpits), so seeing it re-applied to human motor skills is actually just coming full circle. It definitely applies to mobile devices. [Historical note: the finding that it applied to mouse interfaces was actually a big deal at the time.]
One thing to note is that the Fitts'-endowed advantages of the edges and especially corners of the screen no longer exist on a touch interface: the "infinite size" only applies to mousing interfaces since the cursor cannot move past the edges. Obviously, the same limitation does not exist for our fingers. Basically, the edges are no better than the middle of the screen except for the potentially shorter distance to the target.
Here is an '06 study (PDF) about optimal target sizes for one-handed thumb use, taking into account freedom of movement and such. I was hoping to find a paper that provides a modification or a new constant for Fitts' law to account for the accuracy of the touch interface, but a cursory search didn't turn one up. I guess that means I found a potential research topic ;)
I think one general conclusion from applying Fitts' law to smaller-screened mobiles is that it's hard to make usable widget-based interfaces without seriously sacrificing information density. One interesting alternative is gesture-based interfaces (beyond the popular pinch-and-zoom). Unfortunately, their lack of popularity and conventions makes the learning curve rather high. Mobiles are definitely one place where it might be worth the trade-off, though. I predict wider adoption of gesture interfaces on mobiles once conventions stabilize.
Yes, for a touch screen Fitts' law has to be applied in three dimensions, so it's different from the classical mouse movement considerations.
As you say, the origin of the movement is often the default position of the finger. This varies a lot depending on the device where the screen is mounted. On a hand held device you might use the index finger of one hand, or the thumbs of both hands, depending on the design.
Also, on a touch screen you have to move the fingers away from the screen to see it, which makes the distance between controls less important as you move back to the default position between clicks.
What to consider besides Fitts' law is the intuitiveness of the interface. If a button appears where it's not expected, it doesn't matter how close it is, it will still take time to find it.
One specific idea that attempts to leverage Fitts' law is to put the most often used controls at the bottom of the screen (i.e., the opposite of current GUI conventions with the menu bar and toolbar). This allows users to touch multiple controls in sequence without withdrawing their hands to see the effects, shortening the mean distance moved between inputs. For a tablet, kiosk, or desktop device, the bottom of the screen is probably also the hands' "rest" position. However, there is the potential problem of the most important controls being the last thing the user sees when scanning the display.
Fitt's Law "predicts that the time required to rapidly move to a target area is a function of the distance to and the size of the target." What's important isn't that Fitts discovered this (it is obvious) what he noticed was that the increase due to distance and size fit a logarithmic formula, which the law models.
On a Windows-Icons-Menus-Pointer (WIMP) system, what's important is that you have one location with zero distance (where the cursor currently is) and four locations of infinite size (the edges of the screen, which the pointer cannot extend beyond). That's really why Fitts' law pops up so much in UI design (aside from giving weight to things like "don't make tiny buttons", etc.).
But the law makes a lot of assumptions about the range of motion available to your hands. If you're holding a tablet with two hands, the law goes out the window. If you're holding it with your left hand, then things on the right side will be easier to reach, etc. So it's going to be a lot harder to make generalizations than with a pointer.
That said:
Think about where the user's hands are going to be, and whether both will be free or not. Place buttons closest to where you think hands will be.
Cluster buttons such that you aren't requiring the user to make a variety of successive taps that are far apart (unless, of course, you're designing a game, in which case that's part of the skill)
Well, you should design for the most important fingers (the index finger, for example). Not that you shouldn't use the others, of course, but people generally favour some fingers to the detriment of others.
I don't think you can provide a general answer that will work across all sizes and types of touch screens. For example: The infrared vision technology on the Microsoft Surface can fail if a user has extremely dark finger tips (very very rare), but this would not be an issue on a capacitance based touchscreen.
The best practice is lots of testing, with a variety of users. You will quickly learn what works on your device and what doesn't.
I did a paper on this for my graduate human-computer interaction class, using evolutionary computation to design a more efficient keyboard based on the domain of text being typed. I really should release it as an iPhone/Android app.
I'm interested in developing a semi-autonomous RC lawnmower.
That is, the operator would decide when to stop, turn, etc., but could request "slightly overlap previous cut" and the mower would automatically do so. (Having operated high-end RC mowers at trade shows, I know this is the tedious part. Overcoming that, plus the high cost -- which I believe is possible -- would make for a commercial success.)
This feature would require accurate horizontal positioning. I have investigated ultrasonic, laser, optical, and GPS. Each has its problems in this application. (I'll resist the temptation to go off on these tangents here.)
So... my question...
I know GPS horizontal accuracy is only 3-4m. Not good enough, but:
I don't need to know where I am on the planet. I only need to know where I am relative to where I was a minute ago.
So, my question is: is the inaccuracy consistent in the short term? If so, I think it would work for me. If it varies wildly by ±1.5 m from one second to the next, then it will not work.
I have tried to find this information but have had no success (possibly because of the ubiquity of other GPS-accuracy discussion), so I appreciate any guidance.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Edit ~~~~~~~~~~~~~~~~~~~~~~
It's looking to me like GPS is not just skewed but granular. I'd be interested in hearing from anyone who can give better insight into this, but for now I'm going to explore other options.
I realized that even though my intended application is "outdoor", this question is technically in the field of "indoor positioning systems" so I am adding that tag.
My latest thinking is to have three "intelligent" high-dB ultrasonic (US) speaker units. The mower emits RF requests for a tone from each speaker in rapid sequence, measuring the time it takes to "hear" each unit's response, thereby calculating the distance to each of these fixed points and using trilateration to get position. If the fixed-point speakers are 300' away from the mower, the mower may have moved several feet between the 1st and 3rd response, so this would have to be allowed for in the software. If it is possible to differentiate three different US frequencies, they could be requested/received "simultaneously", though you still run into issues when you're close to one fixed unit and far from another, so some software correction may still be necessary. If we can assume the mower is moving in a straight line, this isn't too complicated.
Another variation is that the mower does not request the tones. The fixed units send RF "here comes a tone from unit A", etc., and the mower unit just monitors both the RF info and the US tones. This may simplify things somewhat, but it really requires the ability to determine which speaker a tone is coming from.
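For what it's worth, the trilateration step itself is simple once you have three distances. A minimal 2-D sketch in Python; the beacon coordinates and distances are made-up numbers, and it ignores the timing corrections discussed above:

```python
def trilaterate(p1, d1, p2, d2, p3, d3):
    """Position from distances to three fixed beacons: subtracting the
    circle equations pairwise gives a 2x2 linear system to solve.
    Each distance would come from ultrasonic time-of-flight."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        raise ValueError("beacons are collinear; no unique fix")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Beacons at (0,0), (100,0), (0,100); distances measured from (30,40):
print(trilaterate((0, 0), 50.0, (100, 0), 80.62, (0, 100), 67.08))
```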
This seems like the kind of thing you could (and should) measure empirically. Just set a GPS of your liking down in the middle of a field on a clear day and wait an hour. Then come back and see what you find.
Because I'm in a city, I can't run out and do this for you. However, I found a paper entitled "iGeoTrans – A novel iOS application for GPS positioning in geosciences."
It includes a figure that replicates the test I propose. You'll note that both the iPhone 4 and the Garmin eTrex 10 perform pretty poorly against the accuracy you say you need.
But the authors do some Math Magic™ to reduce the uncertainty in the position, presumably by using some kind of averaging. That gets them down to a 3.53 m RMSE.
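If it is averaging, the idea is easy to prototype. A toy sketch, assuming independent Gaussian noise per fix (real GPS error is correlated over seconds to minutes, which is exactly the crux of your question, so treat this as optimistic):

```python
import random, statistics

def simulated_fix(true_x, true_y, sigma=3.0):
    """Stand-in for one GPS reading: truth plus noise."""
    return true_x + random.gauss(0, sigma), true_y + random.gauss(0, sigma)

fixes = [simulated_fix(0.0, 0.0) for _ in range(60)]  # one fix/second for a minute
mean_x = statistics.mean(f[0] for f in fixes)
mean_y = statistics.mean(f[1] for f in fixes)
print(mean_x, mean_y)  # scatter shrinks roughly as sqrt(N) for independent noise
```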
If you have real-time differential GPS, you can do better. But this requires relatively expensive hardware and software.
Even aside from the above, you have the potential issue of GPS reflection and multipath error. What if your mower has to go under a deck, or thick trees, or near the wall of a house? These common yard features will likely break the assumptions needed to make a good averaging algorithm work and even frustrate attempts at DGPS by blocking critical signals.
To my mind, this seems like a computer vision problem. And not just because that'll give you more accurate row overlaps... you definitely don't want to run over a dog!
In my opinion a standard GPS is nowhere near accurate enough for this application. A typical consumer-grade receiver that I have used has a position accuracy defined as a CEP of 2.5 metres. This means that for a stationary receiver in a "perfect" sky-view environment, 50% of the position fixes over time will lie within a circle with a radius of 2.5 metres. If you watch the position that the receiver reports, it appears to wander at random around the true position, sometimes moving a number of metres away from its true location. When I have monitored the position data from a number of stationary units, they could appear to be moving at speeds of up to 0.5 metres per second. In your application this would mean that the lawnmower could be out of position by some not insignificant distance (with disastrous consequences for your prized flowerbeds).
There is a way that this can be done, as has been proven by the tractor manufacturers who can position seed drills and agricultural sprayers to centimetre accuracy. These systems use Differential GPS, where a fixed reference station is positioned in the neighbourhood of the tractor being controlled. This reference station transmits error corrections to the mobile unit, allowing it to correct its reported position to a high degree of accuracy. Unfortunately, this sort of positioning system is very expensive.
I want to make a card-battle game. In it, cards have specific attributes which can increase the player's HP/attack/defense or attack an enemy to reduce his HP/attack/defense.
I am trying to make an AI for this game. The AI has to decide which card to play on the basis of the current situation: the AI's HP/attack/defense and the enemy's HP/attack/defense. Since the AI cannot see the enemy's cards, it cannot predict future moves.
I looked at a few AI techniques, like minimax, but I think minimax will not be suitable, since the AI cannot predict any future moves.
I am searching for a technique which is flexible enough that I can add a large variety of cards later.
Can you please suggest a technique for such a game?
Thanks
This isn't an ActionScript 3 topic per se but I do think it's rather interesting.
First I'd suggest picking up Yu-Gi-Oh! 5D's Stardust Accelerator: World Championship 2009 for the Nintendo DS, or a comparable game.
The game has a fairly advanced computer AI system that not only deals with expected advantage or disadvantage in terms of hit points, but also card advantage and combos. If you're taking on a challenge like this, I definitely recommend you do the required research (plus, when playing video games is research, who can complain?)
My suggestion for building an AI is as follows:
As the computer decides its move, have it create an array of Move objects, one for each possible move that it can see.
For each Move object, calculate how much less HP the opponent will have, how many cards they will still have, how many creatures, etc.
Have the computer decide what's most important (more damage, more card advantage) and have it play that move.
More sophisticated AIs will also think several turns in advance and perhaps "see" moves that others do not.
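A minimal sketch of that Move-scoring idea, in Python for brevity (the card names, attributes, and weights are all made up; the point is collapsing unlike effects into one comparable number):

```python
from dataclasses import dataclass

@dataclass
class Move:
    card: str
    damage: int          # HP the opponent loses
    card_advantage: int  # net change in our hand/board size
    # extend with whatever attributes your cards actually have

# Illustrative weights; tuning these is where the real design work is.
WEIGHTS = {"damage": 1.0, "card_advantage": 2.5}

def score(move):
    """Weighted sum: one number lets the computer 'decide what's most
    important' by simple comparison."""
    return (WEIGHTS["damage"] * move.damage
            + WEIGHTS["card_advantage"] * move.card_advantage)

def choose(moves):
    return max(moves, key=score)

hand = [Move("Fireball", damage=6, card_advantage=-1),
        Move("Draw Two", damage=0, card_advantage=2)]
print(choose(hand).card)  # "Draw Two" under these weights
```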
I suggest you look at this game of Reversi I built a few weeks back for fun in Flash. This has a very basic AI implemented, but the basics could be applied to your situation.
Basically, the way that game works is that after each move (player or CPU, so I can determine if the player made the right move compared to what the CPU would have made), I create a Vector of every possible legal move. I then decide which move provides the highest score change and set that as the best move. However, I also check whether the move would give the other player access to a corner (if you've never played: the player who grabs the corners generally wins). If it does, I tell the CPU to avoid that move and check the second-best move, and so on. The end result is a CPU that can actually put up a fight.
Keep in mind that this is just a single afternoon of work (for the entire game, from the crappy GUI to the functionality to the AI), so it is very basic, and I could do things like run future possible moves through the check sequence as well. Fun fact, though: my moves (which I based the AI on, obviously) are the ones the CPU would make nearly 80% of the time. The only time that does not hold true is when I play the game like you would chess, making a move solely to set up another move four turns down the line.
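In rough Python, that selection loop looks something like this (board.apply and the corner check are hypothetical stand-ins for the real engine's calls):

```python
def choose_move(board, legal_moves):
    """Greedy pick with a corner guard, per the approach above."""
    ranked = sorted(legal_moves,
                    key=lambda m: board.apply(m).score_change,
                    reverse=True)
    for move in ranked:
        # Skip any move that would hand the opponent a corner.
        if not board.apply(move).opponent_can_take_corner():
            return move
    return ranked[0]  # every move concedes a corner; take the best anyway
```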
For your game, you have a ton of variables to consider, rather than the single point scale I had. I would suggest listing each factor and assigning it a point value so you can weight them by importance. I did something similar for a caching system that automatically determines the most important file to keep based on age, usage, size, etc. You then look at each card in the CPU's hand, calculate each card's value, and play the best card (assuming it is legal to do so, of course).
Once you figure that out, you can look into things like what each move could do in the next turn (i.e., "damage" values for each move). And once that is done, you could add functionality that lets the CPU make strategic moves, setting up a more powerful card or a "finishing" move, or however it works in the end.
Again, though, keep it to a simple point-based system and build from there. You need something you can directly compare, so sticking to a point-based system makes that simple.
I apologize for the length of this answer, but I hope it helps in some way.
I have tried to write a platformer engine a few times now. The thing is, I am not quite satisfied with my implementation of solid objects (walls, floors, ceilings). I have several scenarios I would like to discuss.
For a simple platformer game like the first Mario, everything is pretty much blocks. A good implementation should only check for necessary collisions. For instance, if Mario is running and there is a cliff at the end of the way, how should we check for collision efficiently? Should we check on every step Mario takes whether his hitbox is still on the ground? Or is there some programming technique that lets us avoid handling this every frame?
But blocks are boring, so let's put in some slopes. Implementation-wise, how should slopes be handled? Some games, such as Sonic, even have loop structures the character can run through, go "woohoo", and proceed.
Another scenario is "solid" objects (floor, ceiling, wall) handling. In Megaman, we can see that the player can make himself go through the ceiling by using a tool to go into the solid "wall". Possibly, the programming here is to force the player to go out of the wall so that the player is not stuck, by moving the player quickly to the right. This is an old "workaround" method to avoid player stucking in wall. In newer games these days, the handle is more complex. Take, for instance, Super Smash Brawl, where players can enlarge the characters (along with their hitbox) The program allows the player to move around "in" the ceiling, but once the character is out of the "solid" area, they cannot move back in. Moreover, sometimes, a character is gigantic that they go through 3 solid floors of a scene and they can still move inside fine. Anybody knows implementation details along these lines?
So here, I know that many implementations are possible, but I just want to ask: are there advanced technical details for platformer games that I should be aware of? I am currently asking about three things:
How should solid collision in a platformer be handled efficiently? Can we spend less time checking whether a character has run off and completely left a platform?
Slope programming. At first I was thinking of a physics engine, but I think that might be overkill. From here, I see slopes as pretty much another type of floor that "pushes" or "pulls" the character to a different elevation. Or should they be programmed differently?
Handling of solid objects in special cases. There might be times when the player can slip into solid objects, either via legal game rules or glitches, but all in all, it is always a bad idea to push the player in some random direction when he is inside a wall.
For a small number of objects, doing an all-pairs collision detection check at each time step is fine. Once you get more than a couple hundred objects, you may want to start considering a more efficient method. One way is to use binary space partitioning (BSP) to only check against nearby objects. Collision detection is a very well-researched topic, and there is a plethora of resources describing various optimizations.
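As an illustration of the "only check nearby objects" idea, here's a minimal uniform-grid broad phase (a simpler cousin of BSP), assuming objects are axis-aligned boxes with x, y, w, h fields:

```python
from collections import defaultdict

CELL = 64  # cell size in pixels; tune to the typical object size

def build_grid(objects):
    """Register each object in every cell its bounding box overlaps,
    so only objects sharing a cell ever need a narrow-phase check."""
    grid = defaultdict(list)
    for obj in objects:
        for cx in range(int(obj.x) // CELL, int(obj.x + obj.w) // CELL + 1):
            for cy in range(int(obj.y) // CELL, int(obj.y + obj.h) // CELL + 1):
                grid[(cx, cy)].append(obj)
    return grid

def candidate_pairs(grid):
    """Collect unique pairs that share at least one cell."""
    pairs = {}
    for bucket in grid.values():
        for i, a in enumerate(bucket):
            for b in bucket[i + 1:]:
                key = (min(id(a), id(b)), max(id(a), id(b)))
                pairs[key] = (a, b)
    return list(pairs.values())
```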
Indeed, a physics engine is likely overkill for this task. Generally speaking, you can associate with each moving character a "ground" on which it is standing. Then, whenever it moves, you simply move it along the axis of that ground.
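A sketch of what "moving along the axis of the ground" might look like; storing an incline angle per surface is just one representation (tile-based engines often store a per-tile slope instead):

```python
import math

def walk_along_ground(x, y, dx, slope_angle):
    """Move a grounded character by dx along its current surface:
    horizontal input is projected onto the surface direction, so the
    character gains or loses elevation automatically. slope_angle is
    the incline of the floor the character stands on, in radians."""
    return (x + dx * math.cos(slope_angle),
            y + dx * math.sin(slope_angle))

print(walk_along_ground(0.0, 0.0, 10.0, 0.0))               # flat floor
print(walk_along_ground(0.0, 0.0, 10.0, math.radians(30)))  # 30-degree ramp
```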
Slipping into objects is almost always a bad idea. Try to avoid it if possible.
I'm having to design & develop UI for a Point of Sale (POS) system.
There are obvious features that need to be included, like product selection & quantity, payment method, tender amount, user login (as many users will use one terminal), etc.
My question is related more towards the UI design aspect of developing this system.
How should UI features/controls be positioned and sized?
Is there a preferred layout?
Are there colours I should be avoiding?
If you know of any resources to guide me, that would also help.
The reason this is critical to me is that I am aware of the pressurized environment in which POS systems are used, & I want to make the process as (i) quick, (ii) simple to use, and (iii) result-driven as possible for the user servicing customers.
All answers, info & suggestions welcome.
Thanks.
P.S. If you could mention the trade-offs between controls, that would also be appreciated (e.g. on a touch screen a keypad control is provided, but if you also support keyboard & mouse input, how do you manage the keypad & UI space effectively?)
A couple of thoughts from a couple of projects I've worked with:
For the touch screen, ensure that each button can be pressed as easily by someone with "fat fingers" as by someone with smaller ones (some layouts encourage the use of thumbs at particular locations). Also highlight each button when it is pressed (with a slow-ish fade, if you have spare CPU cycles).
Bigger grids are better than smaller ones. The numeric pad should always be in the same place (often the bottom right). The Enter/Tender/etc. "transaction" keys should be bigger than the individual numeric keys: (1) it makes them more obvious, and (2) they will be pressed more often than other screen areas and will wear out (a bigger area lasts longer on average; this was more important with older-style touch screens, as newer technology is more resilient).
Allow functions/SKUs to be reassigned to different grid positions; the layout that works well for one store will likely be wrong for a slightly different one.
Group related functions by colour, but use excellent contrasts. Make sure that the fore/back combination looks good at all angles (some LCDs "bleed" colours at left-to-right and/or top-to-bottom viewing angles).
Positive touch-screen feedback with sounds needs to have configurable volume and sound sets. Muted tones might be better in a quieter upmarket store, but "perky" sounds are better in a clothing store with louder background music/noise, etc.
Allow the grid size to be specified in percentages or "grid-block units" instead of pixels, and draw everything with vectors, etc., since some hardware combinations may have LCDs with better resolution. (One system I worked on was originally specified as 640x480 but shipped at 1280x1024, so my design pre-planning saved a lot of rework later.)
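To illustrate, a tiny grid-units-to-pixels helper; every number here is illustrative rather than from the system I described:

```python
def grid_to_pixels(col, row, span_c=1, span_r=1,
                   cols=16, rows=12, screen_w=1280, screen_h=1024):
    """Convert a grid-block position/size to a pixel rectangle at draw
    time, so the same layout works at any resolution."""
    cell_w, cell_h = screen_w / cols, screen_h / rows
    return (round(col * cell_w), round(row * cell_h),
            round(span_c * cell_w), round(span_r * cell_h))

# The same (col, row) works at 1280x1024 and 640x480 alike:
print(grid_to_pixels(12, 9, span_c=4, span_r=3))  # big Enter key, bottom right
print(grid_to_pixels(12, 9, span_c=4, span_r=3, screen_w=640, screen_h=480))
```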
And of course, look at the ready-made solutions first (especially if you can get demo software/hardware for evaluation). Although they can be expensive, they've often implemented a lot of things that you'll otherwise have to work through later, and may be cheaper in the long run, even after creating custom add-ons for your system.
Also:
Our UI supported a normal keyboard/mouse combo too (the touchable buttons were just standard button controls sized appropriately). If you pressed a number key it would trigger the same event as clicking the screen-pad button; other hotkeys were mapped to often used button commands (Enter, etc).
If run on a non-POS desktop (e.g. backoffice) the window could be resized too (the "POS desktop" maintained the same aspect ratio, adding dead space at the sides if needed). A standard top menu was available for additional administrative tasks, reporting, etc.
The design allowed everyone to build and test the UI before the associated hardware was finalized. And standard UI testing tools would work too.
Even More:
Our barcode scanners were serial/USB rather than keyboard wedges, so each packet from the device raised a comms event. The selected "scanner type" driver class used the most secure formatting that the device could give us (some can supply prefix, suffix and/or checksum characters if programmed correctly) and then stripped this before handing the code up to the application (see the sketch below).
The system made a "bzzzt" noise when the barcode couldn't be used (e.g. while cash drawer is open).
This design also avoided the need to set the keyboard focus to a specific entry area.
A tip: if the user is manually entering a barcode via the keypad, and hasn't completed it by pressing Enter, and then attempts to scan another barcode, it should beep instead, so the user can accept or cancel the pending item first.
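To make the scanner-driver idea concrete, here's a rough sketch of the verify-and-strip step. The framing bytes and checksum scheme are assumptions; real scanners vary by vendor and configuration:

```python
# Hypothetical framing: STX + code + checksum + ETX.
STX, ETX = 0x02, 0x03

def decode_packet(packet: bytes) -> str:
    """Verify and strip the framing so the application only ever sees
    a clean barcode string (or an error it can turn into a 'bzzzt')."""
    if len(packet) < 4 or packet[0] != STX or packet[-1] != ETX:
        raise ValueError("bad framing")
    body, checksum = packet[1:-2], packet[-2]
    if sum(body) % 256 != checksum:
        raise ValueError("checksum mismatch")
    return body.decode("ascii")

code = b"5012345678900"
packet = bytes([STX]) + code + bytes([sum(code) % 256, ETX])
print(decode_packet(packet))  # -> 5012345678900
```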
Aggregated POS Design Guidelines
Based on the above and other literature, here is my list of guidelines for POS design.
[it would be nice if we grew this list further]
User Performance Priorities (in order):
efficiency (least time to transaction conclusion)
effectiveness (accurate info & output)
user satisfaction (based on first 2 in work context)
learning time (reduce time to learn system by making it simple)
GUIDELINES
Flexible Transaction Building - don't force a sequence on the transaction where possible. Let products be ordered in any order & allow them to be changed up to a point.
Optimise Transaction Rate - allow a user to complete a transaction as quickly as possible (fewest clicks is not really the issue, as more clicks could mean a larger transaction value, which makes business sense).
Support Handedness / Dexterity - most users have a dominant hand and a weaker hand in terms of dexterity. Allow the UI to be customised (with a single click) for handedness. My example: an L->R / R->L toggle button which moves key features like "OK" and "Cancel" into nearer proximity to the weaker hand.
Constant Feedback - provide snapshot feedback which describes the current state of the transaction and its calculated result (NB: accounts) before & after committing the transaction.
Control "Volume" - control volume refers to the colour saturation/contrast, prominence of positioning and size of a control. Design more frequently used controls to have larger "volumes" relative to less frequently used controls. e.g. "Pay" button larger than "cancel" button. E.g. High contrast & greater colour saturation increase volume.
Target Findability - finding & selecting targets (item, numeric key) is key to efficiency. Group related controls (close proximity), place controls on screen edges (the screen edge traps the pointer), emphasise control amplitude (this dimension emphasises the user's normal plane of motion) and use colour coding to make finding & selecting targets more efficient.
Avoid Clutter - too many options limits control volume and reduces findability.
Use Plain Text - avoid abbreviations as much as possible (only use standard abbreviations, e.g. sizes: S, M, L). This is especially true for product lookup.
Product Lookup - support shortcuts for regular orders (e.g. a burger meal), categorised browsing & item-name search (for the least-ordered items). Consider including a special item: any item where the user types in what is wanted (e.g. a specific whiskey order) - this requires manual pricing, though.
Avoid User Burden - the user should be able to read answers to customer questions straight from the UI. So provide regularly requested/prioritised feedback for the transaction (e.g. the customer asks: "What will be the outstanding balance on my account if I buy this item?" It should already appear in the UI).
Conversational Ordering - the customer drives the ordering, not the system. So allow item selection to be non-sequential.
Objective Focused - the purpose of a POS is to conclude the transaction from a business perspective. Always make transaction conclusion possible immediately with a "Pay" button. If clicked, any incomplete items will be undone; the user can then read the order back before requesting cash/credit card.
Personas - there are different categories (personas) of users of POS systems like (i) Clerk/Cashier and (ii) Manager. The UI should present the relevant options to that logged-in persona according to these guidelines i.e. Cashier: large volume on transaction building controls; Manager: large volume on transaction/user management controls.
Touch Screens - (i) allow for touch input with generally larger controls to support a large fingertip as the pointer. (ii) Provide proprioceptive feedback - feedback that indicates which control was pushed (it should have a short delay on its fade: the user's finger will be in the way initially). (iii) Auditory Feedback (optional) - this helps with feedback, especially with regard to errors in a pressurized environment.
User Training - users must be trained to understand the business protocol & how the POS supports that protocol. They are the ones driving the system. Also, speak to POS users to design & enhance your system - again, they are the experienced users of the POS system.
Context Analysis - a thorough analysis of the context of use for your POS system should be performed to implement the POS heuristics above effectively. Understanding the users (human factors), the tasks (frequency, duration, stress factors, etc.) and the environment (lighting, hardware, space layout, etc.) should be done comprehensively during design, not assumed. Get your hands dirty & get into the users' work space! That way you can develop something your specific users can use effectively, efficiently and satisfactorily.
I hope this helps everyone.
To all respondents, I really appreciate your feedback! Please give me more with regard to this answer. Thanks
I ran across this question, and I thought I'd add my two cents since some of my work has been mentioned here.
I agree with most of what's been said, but it's important to remember that most everything mentioned represents heuristics. That means that while they're good principles to follow, there are likely times when (a) specific rules should be broken, and (b) there will be contradictions between rules. The trick is being able to weigh conflicting principles and apply them to the appropriate degrees (as you noted in a previous comment).
In the end, it's a matter of balancing the business requirements and user needs in a way that produces optimal results. And in the real world, I find that this can never be achieved through heuristics alone.
Here's an example: I recently finished POS designs for Subway, Wendy's, and Starbucks (see Case Studies at POSDesigns.com). All of these designs used solid heuristics, but all of them came out very, very different because of differences in the business goals and requirements, the users' needs and background, the environment in which they work, the technology being used, and a whole host of other differences.
You can never create a great design in a vacuum. For each of the clients mentioned above, I traveled around to a lot of different types of stores in multiple countries to get a feel for how the users worked, how the systems would be used, how customers ordered, etc. All this information - along with sales and other data provided by the company - was invaluable in creating a highly usable solution.
Here's another example: Guideline #3 you provided previously ("Support Handedness / Dexterity") is fine as a heuristic (though I have to say I question the conclusion of simply swapping OK/Cancel). But in visiting Subway stores, we discovered that in that context, the location of the register actually plays a greater role in which hand employees prefer.
In other words, registers that were squished against a wall on the right side tended to produce left-handed users, even when the users were right-handed for every other task. This had implications for the way we allowed the UI to be flip-flopped...and who had control of it. There are tons of examples like that, but we never could have achieved the gains that the user interfaces have produced - like 90% reduction in voids, near zero training, increased speed, accuracy, and check sizes, etc. - by following heuristics alone.
One more point (sorry...you've got me going now :-). Many times heuristics are incomplete without more data as to how to apply them. Consider your guideline #11, "Conversational Ordering". There's much more to this guideline than just providing flexibility in order entry. For instance, one of the many things you have to consider is that not all paths should be presented as equally probable.
We analyzed the way Starbucks' customers ordered in various locations across the United States and United Kingdom. Then, we optimized the system for the most commonly spoken patterns. If we had allowed all paths to have the same "volume", we would have sacrificed usability in other areas, since the design would have appeared more cluttered. The new POS system now supports almost all possible order patterns, but the most probable paths are presented at a higher "volume" than those that are less probable.
OK, it turned out to be more than two cents, but the bottom line is this: If you have a chance to visit the environments in which your POS will be used, analyze customer/employee interactions, etc. ...you should take it. Contextual observations and analysis are invaluable in correctly applying heuristics to your situation.
Good luck!
Dr. Kevin Scoresby
FYI - I'd enjoy talking further about this if you or anyone else in the group would like. My office phone number is on my "About Us" page at POSDesigns.com, or you can use the form to initiate an email conversation. Feel free to call anytime during business hours U.S., East Coast Time.
Devstuff already provides some great answers. In addition:
Create a prototype design (can be simply color-printed on paper) and test this in a scenario that is as realistic as possible, i.e. in a store, with a real future user. Enact a few common scenarios and ask the user to really 'use' your prototype as he / she would use the final product. Obtain feedback through interview and observation.
One way to evaluate your design is to check if you have applied the CRAP principles of design. This article discusses how this can relate to user interface design.
In addition to what has already been posted, here are some tips we picked up along the way.
We use two distinct UIs: one for touch screens, with large bold buttons, and one for mouse/keyboard entry. The code behind them is the same; just the layout is different.
For touch screens
Try not to have pop-up messages that take focus away from the main form, as users may not be looking at the screen, for example if they are chatting with the customer. We found that if this happens, users will continue scanning products, unaware that they are not being entered into the sale.
If using a barcode scanner, be aware that they sometimes send an Enter key after the barcode, which will activate the focused control (saying yes/no to pop-ups). To help prevent this, we disable the Enter key-press on buttons, so only a mouse/finger press will fire the click event. We also set tab stop to false (it may be called something different in your language) to stop controls that are touch-only from getting focus.
As far as colours go, we try to stick to bold button and font colours that can easily be distinguished/read in poorly lit rooms and on screens with glare, as most of the time users are not in a position to move the screen should they have problems reading it.
Anything you can do to speed up or help the user is a good thing. For example, on our payment screen, as well as having 0-9 keys for payment entry, we also have £1, £2, £5, £10, etc. keys, so users don't have to add up the money they are given; they can just press the key for each coin/note they receive from the customer.
The best tip I can give is to remember that you are designing for a completely different environment from a desktop application used in an office, and that users may never have used a computer before. Since POS systems are usually locked down, try to make it as easy to use out of the box as possible.
another thing to consider is personas (as introduced in cooper's "the inmates are running the asylum").
essentially, you make up a few canonical "users". give them names, hobbies, skills, a picture, and use them as the people you are designing for.
ie:
billy the cashier: has some computer experience (playing on his ps2). he's in high school, may go to community college. he's a primary user of the system, and wants to be able to learn the new system quickly.
cyrus the manager: needs to manage the cashiers. needs a way, with only his authorization, to void transactions and review logs of the sales for making reports, as well as managing "shrinkage" (theft). he has 2 kids, lives out in the suburbs so has a 45 minute commute; therefore he doesn't want to spend extra time wrangling the system.
you may need three or four personas; any more than that and it becomes hard to design for.
I highly recommend the book "the inmates are running the asylum"; plus cooper wrote another book, "about face", which I have yet to read.
good luck!
I would recommend doing some sort of usability survey amongst your current user group. There is no need for this to be a complicated or highly scientific survey. Present them with simple questions to determine:
The users priority when using the system (accuracy of task, speed, aesthetics)
Preferred input devices
Work flow through such a system
Level of education and domain knowledge
I have found that a lot can be learnt from a simple survey like this and applied to your UI design to ensure that the user's experience is satisfactory.
Great comments from everyone else. I'll just add that there's also an article by Dr. Kevin Scoresby titled "How to Design a (POS) System that Everybody Hates" that discusses usability of POS systems and adds a few points to what people have already mentioned, such as:
Don't punish the employee for customer choices
Avoid creating conflicts with the real world
Avoid color coding (1 out of 10 males has a form of color blindness)
I've also discovered lots of helpful POS design tips at POSDesigns.com. One thing I found interesting is that by focusing too much on the number of button presses, you can actually impact speed--which is often a primary goal. There's also a tip titled "Five Factors that Influence Speed" that I found helpful.
Good luck!
Kyle
There are already some really good systems out there, e.g. Tabtill for Win 8 (http://www.tabtill.com) or Shopkeep for iOS (http://www.shopkeep.com). The fewer clicks your users need to make, the better. As someone who codes such solutions and has worked with clients using various POS systems, I can say some are really frustrating. I remember watching cashiers in a bar tap ten times just to take payment for a couple of items, their fingers hovering hopelessly over the screen trying to find the right colourful button. Keep it simple! Allow sorting of your visible product range, categorise products, or use a barcode reader. Keep at least a 5% gap between buttons and don't let silly animations slow down your UI. Either invent your own design or copy what is already out there, with your own twist.
Disclaimer: I'm not actually trying to make one I'm just curious as to how it could be done.
When I say "Most Accurate" I include the basics
wall
distance
light levels
and the more complicated:
Dust in Atmosphere
rain, sleet, snow
clouds
vegetation
smoke
fire
If I were to want to program this, what resources should I look into and what things should I watch out for?
Also, are there any relevant books on the theory behind line of sight including all these variables?
I personally don't know too much about this topic but a quick couple of Google searches turns up some formal papers that contain some very relevant information:
http://www.tecgraf.puc-rio.br/publications/artigo_1999_efficient_lineofsight_algorithms.pdf - Provides a detailed description of two different methods of efficiently performing an LOS calculation, along with issues involved
http://www.agc.army.mil/operations/programs/LOS/LOS%20Compendium.doc - This one aims to maintain "a current list of unique LOS algorithms"; it has a section listing quite a few and describing them in detail with a focus on military applications.
Hope this helps!
Typically, one represents the world as a set of volumes of space held in some kind of space partitioning data structure, then intersects the ray representing your "line of sight" with that structure to find the set of objects it hits; these are then walked in order from ray origin to determine the overall result. Reflective objects cause further rays to be fired, opaque objects stop the walk and semitransparent objects partially contribute to the result.
You might like to read up on ray tracing; there is a great body of literature on the subject, and well-understood ways of solving what are basically the same problems you list.
The obvious question is do you really want the most accurate, and why?
I've worked on games that depended on line of sight and you really need to think clearly about what kind of line of sight you want.
First, can the AI see any part of your body? Or are you talking about "eye to eye" LOS?
Second, if the player's camera view is not his avatar's eye view, the player will not perceive your highly accurate LOS as highly accurate. At which point inaccuracies are fine.
I'm not trying to dissuade you, but remember that player experience is #1, and that might mean not having the best LOS.
A good friend of mine has done the AI for a long-running series of popular console games. He often tells a story about how the AIs were most interesting (and fun) in the first game, because they stumbled into you rather than seeing you from afar. Now he has great LOS and spends his time trying to dumb the AIs down to make them as fun as they were in the first game.
So why are you doing this? Does the game need it? Or do you just want the challenge?
There is no "one algorithm" for these since the inputs are not well defined.
If you treat Dust-In-Atmosphere as a constant value then there is an algorithm that can take it into account, but the fact is that dust levels will vary from point to point, and thus the algorithm you want needs to be aware of how your dust-data is structured.
The most used algorithm in today's ray tracers is incremental ray marching, which is by definition not correct, but it does approximate the Ultimate Answer to a fair degree.
Even if you managed to incorporate all these properties into a single master algorithm, you'd still have to deal somehow with how different people perceive the same setting. Some people are near-sighted, some far-sighted. Then there's colour blindness. Not to mention that dust-in-atmosphere levels also affect the tear glands, which in turn affect visibility. And then there's the whole dichotomy between what people are actually seeing and what they think they are seeing...
There are far too many variables here to aim for a unified solution. Treat your environment as a voxelated space and shoot your rays through it. I suspect that's the only solution you'll be able to complete within a single lifetime...
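To sketch what that might look like: a minimal ray march through a voxel grid, accumulating attenuation. The grid layout, the extinction-coefficient encoding, and the Beer-Lambert attenuation model are my assumptions, not a standard:

```python
import math

def visibility(grid, start, end, step=0.5):
    """Incremental ray march through a voxel grid. Each grid[z][y][x]
    holds an extinction coefficient (dust, smoke, vegetation...);
    transmittance decays as exp(-sum(sigma * ds)) along the ray.
    Walls are voxels with sigma = inf."""
    (x0, y0, z0), (x1, y1, z1) = start, end
    length = math.dist(start, end)
    steps = max(1, int(length / step))
    ds = length / steps
    optical_depth = 0.0
    for i in range(steps + 1):
        t = i / steps
        x, y, z = x0 + (x1 - x0) * t, y0 + (y1 - y0) * t, z0 + (z1 - z0) * t
        sigma = grid[int(z)][int(y)][int(x)]
        if math.isinf(sigma):        # opaque voxel: line of sight blocked
            return 0.0
        optical_depth += sigma * ds
    return math.exp(-optical_depth)  # 1.0 = perfectly clear, 0.0 = blocked
```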