I'm involved in writing a touchscreen application for a medical device. The program is kiosk-like, in that the start menu, etc., will not be accessible to the user, and the user will use an onscreen keyboard to type any text in the rare event that they need to. The spec'd screen size is 1280x1024.
The question is this: What's the minimum touchable button size for a reasonable interface? I'm thinking that an American dime is a reasonable minimum size in all directions, with the reasoning being that a dime is about as small as we can expect people to feel with their fingers (it's got a diameter of 17.91 mm, according to the almighty Wikipedia).
Or is a dime a bit on the large side?
EDIT: Some extra knowledge about our users. They have to have both hands, because of the nature of the device, and they will not be wearing gloves. They would have to be able to manipulate film cassettes of up to 14x17 inches in size (again, due to the nature of the device), so I feel reasonably confident that they have some manual dexterity.
There are a lot of factors that go into this, and some of them can require the buttons to be a lot larger.
Based on my experience, I would use something no smaller than a US Quarter--a dime is really too small for repeated use. If users will wear gloves or other things that "fatten" the finger, you will need even larger buttons. If you have users who are disabled or have poor motor control (more common than you would expect), a dime-sized button is absolutely useless. Keep in mind that some touch screens are not particularly accurate. I've worked with some that were up to 25 pixels off near the edges.
Also, how often are they pushing the buttons, and how many buttons will be on the screen? Having to hit many small touch screen buttons in sequence will drive users crazy. That's among the reasons you'll notice that many touch screen systems, such as ATMs, are starting to have buttons in excess of 1 inch x 2 inches.
Also, how fault tolerant is the system if they accidentally push the wrong button? Maybe because the screen was off or they were viewing it from an angle? The less fault-tolerant it is, the bigger your buttons need to be. Mistakes happen. Smaller buttons mean more mistakes.
That's going to depend a lot on the device itself as well as the UI that you're putting on it.
Many touchscreens (excluding multitouch) average the area touched, so chances are you don't want to put UI items any closer together than a fingertip's width, to cut down on mis-touches.
Touch screens need to be calibrated post installation, and even then there might be some parallax depending on the screen type or the natural angle from which the screen is being viewed.
You might also consider visual feedback on touch down and only letting the event through on touch release. When you get a touch, display an X marker larger than a fingertip, centered on the point of contact, and let it track until the touch is released. That release point is the click point.
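For what it's worth, here's a minimal sketch of that commit-on-release pattern. It's written in JavaFX only because that toolkit comes up later in this thread, not because it's what the device uses; the button size, marker size, and class name are all illustrative.

```java
import javafx.application.Application;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.input.MouseEvent;
import javafx.scene.layout.Pane;
import javafx.scene.shape.Line;
import javafx.stage.Stage;

public class CommitOnReleaseDemo extends Application {
    private static final double MARKER = 60;   // marker drawn larger than a fingertip

    @Override
    public void start(Stage stage) {
        Pane root = new Pane();

        Button ok = new Button("OK");
        ok.setPrefSize(80, 80);                // roughly fingertip-sized on a 1280x1024 screen
        ok.relocate(100, 100);

        Group marker = buildMarker(MARKER);
        marker.setVisible(false);
        marker.setMouseTransparent(true);      // the marker must never steal the touch

        root.getChildren().addAll(ok, marker);
        Scene scene = new Scene(root, 1280, 1024);

        // Show and track the marker while the press is down...
        scene.addEventFilter(MouseEvent.MOUSE_PRESSED, e -> {
            marker.relocate(e.getSceneX() - MARKER / 2, e.getSceneY() - MARKER / 2);
            marker.setVisible(true);
        });
        scene.addEventFilter(MouseEvent.MOUSE_DRAGGED, e ->
                marker.relocate(e.getSceneX() - MARKER / 2, e.getSceneY() - MARKER / 2));

        // ...and only act on release: the release point is the click point.
        scene.addEventFilter(MouseEvent.MOUSE_RELEASED, e -> {
            marker.setVisible(false);
            if (ok.localToScene(ok.getBoundsInLocal()).contains(e.getSceneX(), e.getSceneY())) {
                System.out.println("OK activated at " + e.getSceneX() + "," + e.getSceneY());
            }
        });

        stage.setScene(scene);
        stage.show();
    }

    // An X drawn as two crossed lines inside a size-by-size square.
    private Group buildMarker(double size) {
        Line a = new Line(0, 0, size, size);
        Line b = new Line(0, size, size, 0);
        a.setStrokeWidth(4);
        b.setStrokeWidth(4);
        return new Group(a, b);
    }

    public static void main(String[] args) {
        launch(args);
    }
}
```

The point of the filters on the scene is that the marker follows the finger no matter which control is underneath, and nothing activates until the finger lifts.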
Above all, user test, user test, user test.
If you're really worried about size - make it fluid so you can change it easily and test it on real users under real circumstances. Did I mention user testing?
Depends on so many factors...
All touchscreens degrade over time, giving less accurate, noisier, and less linear results as they age. Resistive touchscreens have vastly improved, but you'll still have issues, especially with such a large touchscreen.
Further, people have different sized fingers, and different levels of hand/eye coordination.
Lastly, the hardware and software that actually processes the touches before it even gets into your application needs to be calibrated and characterized.
So it's really much more difficult than merely asking how big the buttons need to be.
First, I'd talk to the manufacturer of the touchscreen itself, and the manufacturer of the hardware/software interface and find out their recommendations.
Second, I'd do some tests - make a few targets onscreen, and then record all the points that the touchscreen hardware sends you when you press them. Do a bunch of tests for quick jabbing presses, hard long presses, light presses, etc. See how much the input jumps around (you might be surprised...) and how fast it actually responds (heavy filtering might make the presses register late, which hurts usability if the feedback isn't speedy).
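A throwaway test along those lines is only a screenful of code. Here's a sketch (JavaFX, purely for illustration; the target position and 1280x1024 screen size are assumptions from the question) that draws one target and prints how far each reported press lands from it:

```java
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.layout.Pane;
import javafx.scene.paint.Color;
import javafx.scene.shape.Circle;
import javafx.stage.Stage;

import java.util.ArrayList;
import java.util.List;

public class TouchJitterTest extends Application {
    private static final double TX = 640, TY = 512;    // target center on a 1280x1024 screen
    private final List<double[]> hits = new ArrayList<>();

    @Override
    public void start(Stage stage) {
        Pane root = new Pane();
        root.getChildren().add(new Circle(TX, TY, 10, Color.RED));   // small visual target

        Scene scene = new Scene(root, 1280, 1024);

        // Record every press the driver delivers, however far it lands from the target.
        scene.setOnMousePressed(e -> {
            hits.add(new double[] { e.getSceneX(), e.getSceneY() });
            report();
        });

        stage.setScene(scene);
        stage.show();
    }

    // Print the mean offset and the worst deviation from the target center so far.
    private void report() {
        double sumX = 0, sumY = 0, worst = 0;
        for (double[] p : hits) {
            sumX += p[0] - TX;
            sumY += p[1] - TY;
            worst = Math.max(worst, Math.hypot(p[0] - TX, p[1] - TY));
        }
        int n = hits.size();
        System.out.printf("n=%d  mean offset=(%.1f, %.1f)  worst deviation=%.1f px%n",
                n, sumX / n, sumY / n, worst);
    }

    public static void main(String[] args) {
        launch(args);
    }
}
```

Run the same kind of test near the edges and corners, where the earlier answer saw 25-pixel errors.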
Thirdly, I'd design the user interface for GREAT user feedback. Make the buttons wider than the finger so that when the user presses one, they can still see a little of the button, and make it appear to depress when they press it.
Fourthly, I'd consider adding adaptive learning. You'll have to have calibration run and rerun occasionally, but in between you can adjust to the user slightly to decrease the likelihood of errors. It's not for the faint-hearted, though, and is easy to do wrong, so take care if you want to consider it.
-Adam
I would agree somewhat with James that a US Quarter is a better target, but often you can change that so it is about a US Quarter in one dimension, and a US Dime in the other. Of course, you want to keep the center of active UI elements separated from each other by at least the distance of a US Dime, depending on how "solvable" clicking on the wrong element is. (Clicking on the wrong element in a list is easily solvable if all that is being done is selecting it; clicking on the exit button of a dialog isn't so much.)
Be sure to make your most common cases really easy to do. This usually means making a really big button; or, sometimes, making a whole UI area "clickable" (but if you do that, it's often nicer to the user to also see some sort of button in the area to imply they can click on it.)
Depending on the quality of the touchscreen, you can often "cheat" around the edges. Not only are the edges and corners easiest to hit on a traditional mouse-based interface, but they can be for touchscreen-based interfaces as well, since the user can press their finger right up against the framing element and not worry about triggering something else. Make sure your UI elements accept clicks all the way to the edge of the screen if they are near the edge, as well. Using this, you could have a US dime-sized button in each corner that would still be very repeatably clickable (although possibly lacking feedback if your users have large fingers).
Bigger is better, as we know size matters.
It is important for users of a touchscreen interface to get feedback when they touch the screen, and because this feedback cannot be mechanical, as it is with a physical keyboard or mouse button, we use sound and/or a color change instead.
Playing a sound only when a control is actually activated, rather than on every touch anywhere on the screen, gives better usability.
If the working environment is noisy, you must give color feedback too, and in that case the button should be two or three times bigger than a finger. It is important that the user can see the feedback even while their hand is on the screen, and hands are not made of glass.
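To make that concrete, here is a compact sketch of both ideas, written in JavaFX (chosen only because it appears elsewhere in this thread; the components mentioned below are .NET). The button label, sizes, and colors are invented for the example: the sound fires only from the button's action handler, so stray touches stay silent, and the pressed state recolors an area much larger than a fingertip.

```java
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.layout.StackPane;
import javafx.stage.Stage;

public class FeedbackButtonDemo extends Application {
    @Override
    public void start(Stage stage) {
        Button start = new Button("Start");
        start.setPrefSize(260, 120);   // two to three finger-widths, per the advice above

        // Inline styles for the demo; a real app would use a stylesheet with a :pressed rule.
        start.setStyle("-fx-base: #dddddd; -fx-font-size: 24px;");
        start.pressedProperty().addListener((obs, was, pressed) ->
                start.setStyle(pressed
                        ? "-fx-base: #ffcc00; -fx-font-size: 24px;"   // visible even around a finger
                        : "-fx-base: #dddddd; -fx-font-size: 24px;"));

        // Sound only on successful activation, not on every touch anywhere on the screen.
        start.setOnAction(e -> java.awt.Toolkit.getDefaultToolkit().beep());

        stage.setScene(new Scene(new StackPane(start), 1280, 1024));
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}
```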
On-screen ordering is important too. For a mouse-driven interface, drop-downs, combo boxes, menus, and the like are fine, but on a touch screen they will kill usability, so avoid them. Also, instead of placing controls in the upper-left part of the screen, put them in the bottom-right, which leaves an information area on screen that stays free of the user's hands while they work.
Think about left-handed/right-handed layouts and day/night color schemes too.
I am currently working on a set of .NET UI components for touch screens, and you will be able to take a look at demos on my web site soon.
Atanas
I have a personal project designed for the desktop that I previously created in Adobe XD, and now I would like to put it on Behance. To do so, I need to adapt the layout, designed for the desktop, to mobile.
I don't usually design for smaller screens, so I am wondering how much I need to decrease text and element sizes? For example, if I have a text with a font size of 40px, what calculations should I use to decrease the size for mobile? Is there a default percentage to reduce desktop values? Alternatively, are there visual rules that other designers follow?
I always design for Bootstrap, but I'm not sure if I am thinking about mobile the right way.
I've also posted this on the User Experience Stack Exchange forum, but I'm not sure which one is the best for my question.
Thank you for sharing your thoughts and advice.
I have designed mostly for desktops as a traditional web designer, and now I'm trying to migrate to UI/UX.
Modern devices do most of the scale conversion work for you by adequately scaling the viewport to compensate for the smaller screens and often higher resolutions. Depending on the type of application you are designing, the technology is different, but the result is very similar.
For example, if you were implementing the design for the Web, you would likely need to use browser features like media queries to manage your content.
However, because you are focusing on the design of the site, you should not need to worry about the 'how', so you can focus on what to do.
Here are some tips:
Elements and text appear roughly the same size on desktop and mobile if you hold the device at a casual but comfortable distance and compare it to the size it appears on your desktop's screen at an average viewing distance. You can try this by going to a website built for mobile like Apple's.
Because of the similar apparent size but reduced screen dimensions, you need to simplify your design and avoid multiple columns (especially on phones).
Because you see a smaller portion of your design at once on mobile, there is less need for significant visual hierarchy. For example, if you have multiple heading levels with a significant visual size difference on the desktop, you can probably get away with making them closer in size on mobile.
If you want to see what your design looks like on mobile, try emailing the design to your phone, save it to your pictures, and load the image full screen. You may need to zoom the image in a bit so that the left and right of the design are touching the sides of your phone's screen. If your text looks too small or your elements are too large, adjust the design and load it on your phone again. Keep doing this until you get it right.
With a little practice and effort, you will get the hang of mobile design. And, if you want to take it to the next level, try researching mobile-first design. Here is just one of many articles on the subject.
I'm a CS scrub and I thought I understood everything until now. I'm being prompted to make anything I want, so I'm choosing to make a little game I've been thinking about for a few months. The problem is that I don't know where to start. We've been using JavaFX and we've done some animation but I don't fully understand everything. I don't really understand mouse events but the idea I have depends on them. Anyway, here's the idea:
The main (2D) game is about reducing fractions.
Imagine the window being split horizontally into top and bottom.
Now, imagine boxes spawning out of view and moving into view toward the horizontal center line. The boxes will only be moving vertically and each box has a random integer. When a box gets to the center, it'll stay there and allow for other boxes to land on top of it (or below it if it came from the bottom).
Boxes are eliminated by "reducing" a number in the numerator with a number in the denominator. Several boxes may be selected on one side before reducing them with the other side. Here's a picture that might help convey what I want to do:
[Image: a crude drawing of three frames of the gameplay, done in Paint]
Hopefully that all makes sense.
I've been trying to use an extension of Rectangle but I don't really know what else to do. I figure I'll need to create some ArrayLists to keep track of the boxes and some other lists to keep track of factors as well. Anyway, any help would be fantastic. Thank you guys very much!
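Here is one way to get a first piece of this on screen, sketched under a few assumptions (the class names, sizes, and the 800x600 scene are invented, and the reduction logic is left out): each box is a StackPane holding a Rectangle and a Text, clicking toggles a selected look, and a TranslateTransition drops it from off-screen to the center line.

```java
import javafx.animation.TranslateTransition;
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.layout.Pane;
import javafx.scene.layout.StackPane;
import javafx.scene.paint.Color;
import javafx.scene.shape.Rectangle;
import javafx.scene.text.Text;
import javafx.stage.Stage;
import javafx.util.Duration;

import java.util.Random;

public class FractionGameSketch extends Application {
    private static final double BOX = 60;       // box side length

    // A numbered box; clicking it toggles a "selected" look.
    static class NumberBox extends StackPane {
        final int value;
        boolean selected;

        NumberBox(int value) {
            this.value = value;
            Rectangle body = new Rectangle(BOX, BOX, Color.LIGHTBLUE);
            body.setStroke(Color.BLACK);
            getChildren().addAll(body, new Text(Integer.toString(value)));
            setOnMouseClicked(e -> {
                selected = !selected;
                body.setStroke(selected ? Color.RED : Color.BLACK);
            });
        }
    }

    @Override
    public void start(Stage stage) {
        Pane root = new Pane();
        Random rng = new Random();
        double centerY = 300;                    // horizontal center line of an 800x600 scene

        // Spawn a numerator box above the view and drop it so it rests on the center line.
        NumberBox box = new NumberBox(2 + rng.nextInt(11));
        box.relocate(200, -BOX);                 // start out of view, above the top edge
        root.getChildren().add(box);

        TranslateTransition drop = new TranslateTransition(Duration.seconds(2), box);
        drop.setToY(centerY - BOX - box.getLayoutY());   // final top edge sits at centerY - BOX
        drop.play();

        // A real game would keep List<NumberBox> for the numerator and denominator stacks,
        // spawn boxes on a timer, stack them, and remove pairs that share a common factor.
        stage.setScene(new Scene(root, 800, 600));
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}
```

Extending StackPane rather than Rectangle makes it easy to draw the number on top of the box while still handling the mouse click on the whole thing.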
My app needs to show several buttons, without overlap, and preferably without scrolling or zooming. They must be big enough to poke with a finger and read the text. Button width depends on its text length, and the height is constant. The screen size is known.
Each button represents a food about which I know some nutritional information. I'll calculate a protein:carb ratio and a fat content, both ranging from 0% to 100%.
I want to put the buttons close to a position that reflects their nutritional content: e.g. protein-rich at the top, carby at the bottom, fatty on the right and lean on the left. So cake would be bottom right and meats would be somewhere on the top edge.
Often, there'll be overlap and I'll have to nudge them away from each other.
The puzzle is to invent an algorithm for that nudging. The desiderata in order of priority are:
1) Readable and pokeable size, no overlap.
2) No scrolling or zooming required, although it'll happen when there are so many buttons that they could never fit on the screen even if we didn't care where they were.
3) Buttons should be close to where the user would look based on knowing the nutritional content of the food.
Incidentally, I'm using JS on a smartphone, not Prolog or the like.
(There are some seeming dupes, but no solutions. One is about diagonal stalks, another just advocates throwing it at a game engine, but most are devoid of answers.)
The MArVL group at Monash University does work on constraint-based layout. Some of their software might be applicable to your problem.
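If a full constraint solver is more than you need, a greedy nudge pass can get you surprisingly far. This is only a rough sketch of that idea (written in Java to match the other examples in this thread, though it ports to JS directly; the class and method names are invented): place each button at its ideal nutritional position first, then repeatedly push apart any overlapping pair along the axis that needs the smaller shove, clamping everything to the screen.

```java
import java.util.List;

public class ButtonNudger {

    public static class Box {
        public double x, y;                 // top-left corner, starts at the ideal position
        public final double w, h;           // width from text length, constant height
        public Box(double x, double y, double w, double h) {
            this.x = x; this.y = y; this.w = w; this.h = h;
        }
    }

    // Push overlapping boxes apart in place; screenW/screenH are the known screen size.
    public static void nudge(List<Box> boxes, double screenW, double screenH, int maxPasses) {
        for (int pass = 0; pass < maxPasses; pass++) {
            boolean moved = false;
            for (int i = 0; i < boxes.size(); i++) {
                for (int j = i + 1; j < boxes.size(); j++) {
                    Box a = boxes.get(i), b = boxes.get(j);
                    double overlapX = Math.min(a.x + a.w, b.x + b.w) - Math.max(a.x, b.x);
                    double overlapY = Math.min(a.y + a.h, b.y + b.h) - Math.max(a.y, b.y);
                    if (overlapX > 0 && overlapY > 0) {
                        // Separate along the axis that needs the smaller shove, splitting the
                        // movement so each box stays as close as possible to its ideal spot.
                        if (overlapX < overlapY) {
                            double shove = overlapX / 2 + 1;
                            if (a.x < b.x) { a.x -= shove; b.x += shove; } else { a.x += shove; b.x -= shove; }
                        } else {
                            double shove = overlapY / 2 + 1;
                            if (a.y < b.y) { a.y -= shove; b.y += shove; } else { a.y += shove; b.y -= shove; }
                        }
                        moved = true;
                    }
                }
            }
            // Keep everything on screen; if the buttons simply cannot fit, scrolling takes over.
            for (Box box : boxes) {
                box.x = Math.max(0, Math.min(box.x, screenW - box.w));
                box.y = Math.max(0, Math.min(box.y, screenH - box.h));
            }
            if (!moved) {
                break;                      // no overlaps left; priorities 1 and 3 satisfied
            }
        }
    }
}
```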
I am fairly new to Mac and iOS programming, and have recently decided to get really serious about developing applications for both platforms. The first step I took was to register to the Mac and iOS developer programs, download Xcode and study a book about Objective-C. I spent the last 6 weeks familiarizing myself with Objective-C, its syntax, concepts and the Foundation framework, and was purely developing command line applications for those purposes.
Now, the next step seems to be Cocoa and developing applications that offer graphical UIs, which I'm now looking into. Now, here's an issue I am having with this: as I am completely blind, I cannot visually see the screen. Thus, I use VoiceOver, the screen reader built into OS X and iOS. Maybe some of you devs have heard of VoiceOver at some point, as Apple has specified a variety of accessibility guidelines that concern VoiceOver. On that note, thanks a lot to all of you who abide by those guidelines, your effort is greatly appreciated!!! :-)
As for Xcode, it actually works really well with VoiceOver (VO). Adding in new UI elements is also no big deal, I can just copy them from the library and paste them into the view. However, I cannot drag them around and set them up in an appealing fashion, or at least I haven't found any way of doing it just yet as I'm just starting out!
Now, I really would like to know if there is any 'textual' way of arranging UI elements. I know the inspector has a great variety of options, but I'm not sure yet if any of them would let me, say, change the coordinates of a UI element by hand. Also, I've read about the new constraints which help create a consistent layout, but at this point I'm not really familiar with how they can be used or whether they would be at all helpful in my case.
Also, I do realize that producing an interface that is 100% appealing to a sighted end user may not be possible for me, as it's hard for me to decide on color selections or to design a logo. Thus, I would probably need to hire someone for those things. However, if it would be possible for me to just specify the layout roughly, that would already help me a lot!
Thanks for any ideas / suggestions :-)
Robin
I'm not a blind user myself but I've worked with VoiceOver so hopefully some of this will help. I'm making this a community answer so feel free to add tips from your own experience if you've worked with UI layout in Xcode with VoiceOver.
When editing UI in Xcode there is an Inspector where you can change the size and position of the views. The shortcut to get to the Size Inspector is Alt+Command+5. The same shortcut works with VoiceOver.
You said that you were coding for both iOS and OS X. On iOS the y-axis starts at the top and points downward (so a higher y value means that the view is lower on the screen). This means that x and y specify the upper-left corner of the view, and the width and height extend to the right and down from there.
On OS X it's the opposite. The y-axis starts at the bottom, and a higher y value means that the view is higher on the screen. In both cases the x-axis goes from left to right. This means that x and y specify the lower-left corner, and the width and height extend to the right and up.
Further, each view is positioned relative to its parent rather than in absolute coordinates. This means that if you position a view at x=10 and y=30, and then position another view inside it at x=5 and y=10, the inner view will have an absolute position on the screen of x=15 and y=40.
If you can picture the layout in your head then you should be able to do it like this but it may still be hard to do.
Update
At the top of the hierarchy these coordinates relate to the size of the window. On iOS you have fixed sizes (320×480 for the iPhone and 1024×768 for the iPad). Depending on whether the device is in landscape or portrait, one of these is the width and the other is the height. You usually subtract 20 pixels from the height to account for the status bar, so the coordinate where y=0 would be directly below the status bar.
On OS X you can change the size of the window yourself. I will try and explain where to find it.
At the top level, navigate to the "source code group" and interact with it. There you should find a "navigation bar group", a "table", and a "scroll area". Interact with the table. In that table you should find a list of Placeholders and a list of Objects. One of the Objects will be the "Window". With the Window selected, all the inspectors will change properties of the Window. You can quickly jump to the Size Inspector by changing to any other inspector and then back to the Size Inspector, for example by typing Alt+Command+4 and then Alt+Command+5. The first two elements in the Size Inspector should be the Width and Height of the Window.
This discussion will inevitably go off-topic, but here are my ideas:
Consider specializing in something else. There are so many things a programmer can work on. Many, if not most, programmers can enjoy long, successful careers without ever having to arrange any UI elements. Frankly, I don't think that this is particularly exciting, unless you have more of a designer mindset.
Alternatively, whatever tool you are using to arrange the elements, this layout is probably saved to some file, probably in some XML-like fashion. Find this file and edit it.
If you want to get really good at arranging UI elements, consider first hiring several people to do multiple arrangements for you. Then analyze the numbers and come up with some kind of formula that will be a good heuristic for such an arrangement. Or perhaps somebody has already come up with this kind of formula.
I am trying to make an arcade machine. The user will purchase credits, which will allow him to play for X minutes. I want to write "9:42 minutes left" at the left corner of the screen, even if he's playing a full screen game (UrbanTerror, for example).
I would really like if I could do this with Ruby, but any other language is OK. Any ideas?
Thanks in advance.
A good example of such an application is XOSD.
The problem is that it will probably fail over any GLX context, which is what fullscreen games like Urban Terror use. Even if it did draw, the game would overdraw it almost instantly, so the best you would get is heavy flicker.
You are probably better off with a cheap hardware solution, like a small secondary display (there are some USB 7" displays out there) or an LCD device. I would even claim that's good for usability.
Perhaps this is of help to you, but I don't know whether it works across several applications or with fullscreen applications:
http://doc.trolltech.com/3.3/opengl-x11-overlays.html
The idea is to use a special overlay capability of the graphics card, which is typically used for popup windows. Perhaps you can create such an overlay at the topmost level and it will also work in fullscreen -- perhaps not.