Find out which keyboard layout is used, using Ruby

How can I find out which keyboard layout the user of my Ruby application is using?
My aim is to have a game where you can move the player on a map. To go one step down and one step left, you press "Y" on a German keyboard; on an American keyboard, you would press "Z". We optimized the game for Windows and Mac, so I would like a solution for both platforms (and we don't use any Command/Shift/Control keys).

For Windows, you probably have to use the Windows API GetKeyboardLayout(), unless Ruby provides a wrapper for that.
There are a lot of useful I18n resources for Windows on the MSDN web site.
It might be easier to simply allow them to configure it themselves as a preference if you don't have a good portable way of detecting it.
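For illustration, here is a minimal sketch of what that route could look like; it assumes a Windows Ruby with the stdlib Fiddle bindings, and the 0x07 check is LANG_GERMAN from the Win32 headers. There is no equally simple stdlib route on the Mac, which strengthens the case for a user preference:

    require 'fiddle'

    # Sketch only: assumes a Windows Ruby with the stdlib Fiddle bindings.
    user32 = Fiddle.dlopen('user32.dll')
    get_keyboard_layout = Fiddle::Function.new(
      user32['GetKeyboardLayout'],
      [Fiddle::TYPE_ULONG],    # DWORD idThread; 0 means the calling thread
      Fiddle::TYPE_UINTPTR_T   # returns an HKL handle
    )

    hkl     = get_keyboard_layout.call(0)
    langid  = hkl & 0xFFFF     # low word of the HKL is the language ID
    primary = langid & 0x03FF  # low 10 bits are the primary language

    # 0x07 is LANG_GERMAN: a QWERTZ layout, so Y and Z are swapped.
    puts primary == 0x07 ? 'German (QWERTZ) layout' : format('LANGID %#06x', langid)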

I think it will be much easier and more natural to allow users to define the keys themselves.
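To make that concrete, a minimal sketch of a rebindable key table might look like this (the action names and default keys are hypothetical; io/console provides unbuffered single-key reads):

    require 'io/console'

    # Hypothetical defaults; a US player would rebind :down_left to 'z'.
    bindings = {
      down_left: 'y',
      up:        'w',
      fire:      ' '
    }

    def rebind!(bindings, action)
      print "Press the new key for #{action}: "
      bindings[action] = $stdin.getch
      puts
    end

    rebind!(bindings, :down_left)
    puts bindings.inspect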

As Alexander recommended, let the user define the keys themselves.
But, if you really want to recognize the layout, you could always ask the user to press keys in certain positions, particular to some layouts.
"Press the second key to the left of the return key. If your return key is two rows high, press the lower one"
[presses ä]
"Looks like you have a scandinavian keyboard"
That, however, is a horrible kludge, and in a game context I would recommend the custom key-mapping method instead.
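For what it's worth, a variation of that probe can be sketched in a few lines of Ruby. Rather than a full layout table, it leans on one physical fact: the letter key to the right of T types Y on QWERTY and AZERTY but Z on German QWERTZ, which is exactly the distinction the game cares about:

    require 'io/console'

    # A sketch of the layout probe: the key to the right of T is Y on
    # QWERTY/AZERTY but Z on a German QWERTZ keyboard.
    print 'Press the letter key immediately to the right of T: '
    case $stdin.getch.downcase
    when 'z' then puts 'Looks like a German (QWERTZ) keyboard.'
    when 'y' then puts 'Looks like a QWERTY (or AZERTY) keyboard.'
    else puts 'Unrecognized layout; falling back to custom key mapping.'
    end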

Related

Are keyboard shortcuts mandatory for 508 compliance?

I have researched this a lot and seem to be getting conflicting answers on SO and all over the web. I understand that with Section 508, compliance DOES NOT equal accessibility.
The biggest thing is that the UI/UX designer is being told that the dropdown menu NEEDS keyboard shortcuts to be 508 compliant. I see Windows Forms applications having this, but for web development I do not think it is mandatory for being "compliant".
My other question that was answered is here: MVC 4 site 508 compliant
I partially agree with thinice, but I agree with the first two sentences of the comment he left.
The sentences I am referring to are:
They should be -reachable- by keyboard for 508. I'm maintaining emphasis on the difference between a shortcut and being reachable.
Crixus said:
The biggest thing is that the UI/UX designer is being told that the dropdown menu NEEDS keyboard shortcuts to be 508 compliant.
You need to clarify this. Do you mean a simple <select>, or a dropdown for a navigation menu? As thinice stated in the comments, Section 508 just says it needs to be reachable. The question becomes:
how are you adding shortcut keys to your application? Are you adding them via the accesskey attribute, or the way Gmail/Yahoo Mail adds shortcut keys?
I thought I had written an answer about accesskeys, but I cannot find it. Essentially, accesskeys sound like a great thing, but if you look at the keys you are allowed to use that do not interfere with either browser or assistive-technology keys, you are quite limited. Gez Lemon did an overview of accesskeys and their issues. If you want to take the Yahoo! Mail approach, you have to do a bit more work; Todd Kloots made a presentation about ARIA which may be helpful.
Which leads me into the second part. If a site uses JavaScript heavily, people use both the 1194.21 (software applications/OS) and the 1194.22 (web) standards to evaluate it. If the site uses JS to make a nav menu (the YUI menu is an example), the dropdown behavior needs to be reachable by keyboard. I would say this falls under:
§ 1194.21 Software applications and operating systems.
(a) When software is designed to run on a system that has a keyboard, product functions shall be executable from a keyboard where the function itself or the result of performing a function can be discerned textually.
AND
(c) A well-defined on-screen indication of the current focus shall be provided that moves among interactive interface elements as the input focus changes. The focus shall be programmatically exposed so that assistive technology can track focus and focus changes.
I say both standards are used because (a) says you have to be able to get into the navigation area via the keyboard. (c) comes into play because in some menus you can tab to all of the parent items, but you cannot get into the dropdown part without a mouse. I have seen menus where you can tab to the sub-menu items, but the menu does not pop open. So if you just use the keyboard (mobility impairments) rather than JAWS, you will have no idea where you are.
I see Windows Forms applications having this, but for web development I do not think it is mandatory for being "compliant"
I would say actual applications, like Word, Outlook, etc., supply shortcuts for frequently used commands. If you are doing this for a web application, think about how many you add; this is not a mandatory piece for compliance. If you are making something like a navigation bar, I would recommend using ARIA roles, specifically role="navigation", on the parent element as a best practice.
The problem with some standards (as well as many laws) is that they're open to interpretation...
The only part of the 508 standards I can find that mentions keyboard use is this (verbatim):
Subpart B -- Technical Standards
§ 1194.21 Software applications and operating systems.
(a) When software is designed to run on a system that has a keyboard, product functions shall be executable from a keyboard where the function itself or the result of performing a function can be discerned textually.
My spin on this is:
A keyboard shortcut for navigation options may be impractical given the number of operations/features a given section may contain. What is important is that they're reachable -somehow- via keyboard.
From a UX standpoint, key features should have shortcuts "just because" it's good UX practice. But shortcutting everything goes from one ditch into the other.
508 != accessibility, but if you work for a gov/edu, chances are it's in your PD to be compliant.
At the other end of the spectrum is WCAG, which is pretty much coupled with 508 compliance and, in my book, better defined: keyboard requirements fall under 'Operable' in WCAG.
In a nutshell:
It's good practice for UX to have custom keyboard shortcuts for important features, but that by itself has no bearing on 508 compliance (with the exception that functionality should be reachable by keyboard -somehow-).
There are levels of 508 compliance, if you're talking about a government project. Some departments assign 508 scores to their developers, and it factors into your score for future contracts. 508 Compliance only requires that everything is reachable by keyboard, which is usually true, in a way. Screen readers will read everything that's not hidden, and tab keys will take people through links. But if you want a good score, you must address the intent and not only the letter of the law.
Edit: Screen readers will read some hidden elements. One method is to absolutely position an item above the screen with a negative top position. Another is to use the clip property.
http://adaptivethemes.com/using-css-clip-as-an-accessible-method-of-hiding-content/
But if you're using display:none, heights of zero, or JavaScript toggles, many screen readers will not speak these items.
In the case of a drop-down you are actively hiding elements from screen readers, so you do have to fix it, because most readers won't announce things hidden with display:none.
You will not find definitive documentation on keyboard navigation. The reason no one will specify exactly what to do is that there are so many potential conflicts: with the browser, the OS, etc. There are also no standards, although ARIA is making progress:
http://www.w3.org/TR/wai-aria-practices/#keyboard
I would not put accessKeys on a menu, if that's what you meant.
Instead see: http://www.w3.org/TR/wai-aria-practices/#aria_ex_widget
I would save actual accessKeys for major things like 'Search' and 'Home'. Adding a learning curve to your site wouldn't help the cause if you had an accessKey for everything. If, for example, you put accessKey=A on "About Us" and had 20 accessKeys assigned to letters, it would be bad.
I've been doing 508 sites for a long time, and personally, I just don't use drop-downs. It's far simpler to add subpage menus, and I personally hate clicking on dropdowns. They require a precision in clicking that just irritates me, and they don't help with accessibility; remember, accessibility also includes people who don't click very well. Plus, dropdowns are limited in the number of levels you can have, not technically, but from a UX view.
What I use:
Tab indexes.
Carefully placed menus so that a user won't get a huge list of links before hearing the basic idea of the site or page.
On some projects, tree menus with matching arrow-key page navigation, sequentially.
Accesskeys H for home and S for search, if needed.
The problem especially is in sorting information. Think how quickly you scan a long list of links, and then imagine sitting there and waiting for it to be read to you. Perhaps organize your content into digestible pieces and let the search box do the scanning; it depends on the content.
Luck. :)

No, really. What is the proper way to handle keyboard input in a game using Cocoa?

Let's say you're creating a game for Mac OS X. In fact, let's say you're creating Quake, only it's 2011 and you'd prefer to only use modern, non-deprecated frameworks.
You want your game to be notified when the user presses (or releases) a key, any key, on the keyboard. This includes modifier keys, like Shift and Control. Edited to add: you also want to know whether the left or right version of a modifier key was pressed.
You also want your game to have a config screen, where the user can inspect and modify the keyboard config. It should contain things like:
Move forward: W
Jump: SPACE
Fire: LCTRL
What do you do? I've been trying to find a good answer to this for a day or so now, but haven't succeeded.
This is what I've come up with:
Subclass NSResponder and implement keyUp: and keyDown:, as in this answer. A problem with this approach is that keyUp: and keyDown: won't get called when the user presses only a modifier key. To work around that, you can implement flagsChanged:, but that feels like a hack.
Use a Quartz Event Tap. This only works if the app runs as root, or the user has enabled access for assistive devices. Also, modifier key events still do not count as regular key events.
Use the HIToolbox. There is virtually no mention at all of it in the 10.6 developer docs. It appears to be very, very deprecated.
So, what's the proper way to do this? This really feels like a problem that should have a well-known, well-documented solution. It's not like games are incredibly niche.
As others have said, there’s nothing wrong with using -flagsChanged:. There is another option: use the IOKit HID API. You should be using this anyway for joystick/gamepad input, and arguably mouse input; it may or may not be convenient for keyboard input too, depending on what you’re doing.
This looks promising:
+[NSEvent addLocalMonitorForEventsMatchingMask:handler:]
Seems to be new in 10.6 and sounds just like what you're looking for. More here:
http://developer.apple.com/library/mac/#documentation/Cocoa/Reference/ApplicationKit/Classes/NSEvent_Class/Reference/Reference.html#//apple_ref/occ/clm/NSEvent/addLocalMonitorForEventsMatchingMask:handler:

How to imitate a [Ctrl+left mouse click] on the center of a form, or open another program and type in a word?

The Babylon dictionary and a couple of other dictionaries allow you to click on any word in any Windows program; they automatically recognize the word under the cursor and at once open the dictionary window, searching for that word in the installed dictionaries.
You can, on the other hand, open your dictionary yourself, type in your word, and press Enter; the result will be the same.
There's a Delphi form containing a text label, for example with the word "Automaton".
My question is:
How to send a word from my Delphi application right into the dictionary window, as if you typed it manually and pressed Enter?
The best solution would be to send a message through the Windows messaging mechanism, but if that is too complicated, there is another approach, hence the second question: as I described, we need to simulate a [Ctrl+left mouse] click on the form where this word is displayed [a visual label on the screen of my Delphi application], to be exact, on some central pixel of this label.
Could you kindly give advice on how to do one or the other in Delphi?
Edit:
The problem with AppActivate is this: the Babylon dictionary has a daemon part that sits in the tray.
In the Task Manager, the real window where the text should be input is also named 'Babylon'.
So AppActivate('Babylon') tries to bring the non-visual part of the application to the front.
Do you have any suggestion for how to determine the window handle of the real, visual part of the application? In the Task Manager, I repeat, both the visual and non-visual parts are named 'Babylon'.
I cannot offer an answer so much as some insight and advice...
There are certain applications which "intercept" keyboard and mouse instructions, and essentially "nullify" them if they are being imitated by software. Generally speaking, you'd only see this done by design in proper antivirus software such as Kaspersky... however:
The way some (not many, but some) programs hook keyboard and mouse inputs behaves, as a side effect, the same way. If you have attempted all of the advice given in the comments above and cannot get Babylon to trigger an action, it is likely that Babylon behaves as I have described.
If what I suspect is true, then the method you are attempting is simply not possible (at least, not using any simple Pascal code on its own... ASM might be able to do it but that's beyond my knowledge).
A better solution may be to do a little research to see if any of the following options are available to you:
1) Does Babylon have a Pipeline or API you can use to interface your application(s) with it?
2) Is the particular functionality you require of Babylon accessible through one (or more) DLL files distributed as part of Babylon?
3) Is there an alternative to using Babylon for your needs?
I know it's not an answer as such (certainly not one you'd want to hear), but it may point you in a better direction.

Is it possible to programmatically turn on the MacBook Pro's keyboard backlight for individual keys?

Although I have a feeling that this isn't technically possible, it's worth asking anyway. Is it possible to turn on the MacBook Pro's keyboard backlight for individual keys? I am working on a piece of grid-based software which allows the user to navigate around by pressing any key on the keyboard to position the cursor at that point in the grid. It would be very cool if I could somehow turn on the backlight for certain keys only, to give the user an easy way to see the current position of the cursor.
Is it even possible for an application to control the keyboard backlighting at all, let alone for individual keys?
Yes, programs can control the backlight.
Here is an iTunes visualizer that pulses the keyboard backlighting to music:
http://www.youtube.com/watch?v=LUXLkwlF9e8
How to manually adjust (via plugin):
http://osxdaily.com/2006/11/30/how-to-manually-adjust-the-macbook-pro-keyboard-backlight/
I'm not sure about programs controlling individual keys, but as that would require additional hardware on Apple's part, I doubt it.
Well, after trawling the webs, it looks like the answer to that is no. But I'd like to point out that each key does have its own light: a tiny little LED (the same kind they use under phone keypad buttons). Also, I've seen some people saying that flashing these lights on and off repeatedly is bad for them. Bullshit: all digital electronics control light output from LEDs by flashing them on and off many, many times a second. Read up on PWM on Wikipedia or something...
Anyways just had to get that out there :)
Thanks,
Nic

User interface paradigms that need changing?

Oftentimes, convention is one of the most important design considerations for a user interface. Usually the advice is to do it like Microsoft does.
This is for three reasons:
If it ain't broke, don't fix it.
If your users expect to click on a floppy disk icon to save, don't change the icon (even though some of them may have never seen an actual floppy disk).
Users don't want to re-learn the interface (and hot keys, etc.) with each different application they use.
At the same time, Emerson said "*A foolish consistency is the hobgoblin of little minds.*" So when does maintaining a consistent user interface cross the line from a good idea to stagnation?
Microsoft shook up the good old WIMP GUI with the introduction of the toolbar, and then again with the Ribbon control (which is the natural evolution of the toolbar, like it or not). Now we are seeing ribbons everywhere.
So my question is, what are some user interface paradigms that are accepted and consistent across multiple applications, but have stayed past their prime and are starting to reek? Are there some important changes that would benefit from a grass roots push by developers to innovate and improve the user interface experience for our users?
One thought that came to mind for me is the modal pop-up dialog. You know the ones that say "Are you sure you want to...? [Yes] [No] [Cancel] [Maybe]" and their evil twin "Successfully completed what you wanted to do! [OK]". We are seeing a movement away from these with the "info panel" in browsers; I think it needs to be adopted in Windows application development as well.
If possible please list a solution for each stale UI item.
And please don't list Clippy. We all know he was a bad idea.
NOTE: This is specifically Windows client user interface paradigms, but I am certainly open to drawing inspiration from the web, the Mac, etc.
You mentioned popup modal dialogs, and I'd argue that non-modal ones are just as bad. Any dialog box removes focus from the program; it can end up behind the program and be hard to find, and it might not even appear on the same virtual screen.
I'd like to see an end to all dialog boxes. If you need to stop someone from using the UI because of some non-normal circumstance, then remove the relevant parts of the UI from the window and replace them with what the dialog would have contained. Bring the UI back once the problem has been handled.
Clicking things on touch interfaces
It's incredibly difficult to click on things on a touch interface, because you don't know when you have pressed the screen hard enough. And if you add an animation to the button you are clicking, you most likely won't see it, because your finger is in the way. Adding other feedback, like vibrating the phone or painting waves on the screen, might work, but there is usually a delay that is too large, much larger than the tactile sense of a button being pressed. So until they invent a screen with buttons that can physically be pressed, all touch devices should move towards dragging user interfaces (DUIs) instead.
Counterintuitively, it is easier to press an object on the screen, drag it, and then release it than it is to just press and release it. That is probably because you can see the object moving when you start dragging, and you can adjust the pressure while dragging it. Dragging also gives you a lot more options, because you now have a direction, not just a point that was clicked. You can do different things when the user drags the object in different directions. Speed can also be used, as well as the point where the user releases the object. The release point is the real strength of DUIs, because it is very easy to release something, even with pixel precision.
Some designs have started to use DUIs, like (here we go) the iPhone, the Palm Pre, and Android phones. But only part of their design is DUI; the rest is clicking. One area they all have in common is the keyboard: instead of clicking on a key, the user presses any key, then drags their finger towards the key they really wanted to hit. Unlocking these phones also uses dragging.
Other easily implemented DUI features would be things like mouse gestures, where dragging in different directions, or drawing different shapes, does different things. There are also alternative keyboards being researched which put a bigger emphasis on dragging. All buttons can be changed into switches, so you have to drag them down a bit to activate them. With well-designed graphics, this should be intuitive to the user as well.
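As an illustration of how much extra signal a drag carries compared with a click, here is a small sketch in Ruby (the field names are hypothetical; a real toolkit would deliver the touch samples) that derives a direction and a speed from the press and release points:

    # Sketch: derive direction and speed from a drag's press/release samples.
    Gesture = Struct.new(:x0, :y0, :t0, :x1, :y1, :t1) do
      def dx; x1 - x0; end
      def dy; y1 - y0; end

      def distance
        Math.hypot(dx, dy)
      end

      def speed
        distance / (t1 - t0)  # pixels per second
      end

      def direction
        return :tap if distance < 10.0  # a little slop still counts as a press
        if dx.abs > dy.abs
          dx > 0 ? :right : :left
        else
          dy > 0 ? :down : :up          # screen coordinates: y grows downwards
        end
      end
    end

    g = Gesture.new(100, 100, 0.0, 180, 104, 0.25)
    puts g.direction    # => right
    puts g.speed.round  # => 320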
The Apple Human Interface Guidelines are a good read on this topic. They discuss this from a very broad point of view and the guidelines apply to any platform, not only Mac.
The file system. I want to save a file... Oops, I need to think of a file name first. Well... how about... blah.doc.
6 months later...
Where the %#*(%& * did I save that %()#*()*ing file?
The solution is to build a versioning system into the application, or better, into the OS. Make files findable by their content, with a search engine, instead of forcing the user to come up with a memorable name when all they want is for their file not to get lost.
Eliminate the save step. Type something into the application, and it's just there; there's no risk of losing it by some misstep, like forgetting to save. If you want an older version, you can just pick a date and see what the document looked like back then.
To build on the search-engine idea: it's a pain having to navigate some arbitrary tree structure to find your stuff. Searching is much easier. However, you might still want something like a "folder" to group multiple files together. Well, you can build a richer metadata system, with a "category" or "project" field, and set up the search engine to show items by project or by category. Or group by those, or by whatever new UI discovery we make next.
This question is a bit too open-ended, IMHO.
However, my main approach when designing anything is:
Fits in wherever it is. If it's a Windows app, I copy MS as much as possible.
It's simple.
It provides options
Buttons have a nice description of what the result of clicking will be, as opposed to just 'Yes' or 'No'.
It's harder to answer the rest of your post without spending hours typing out an arguably useless (and repeated) set of guidelines.
In my mind, the one thing that really stands out is that USERS need more and easier control over the application's user interface appearance and organization.
So many interfaces cannot be modified by the user so that the most-used/favorite functions are grouped together. This ability would make your favorite software even easier for getting things done.
Error messages need a "Just do it!" button.
Seriously, I really don't care about your stupid error message, just DO WHAT I TOLD YOU TO DO!!!
I think the entire Document model of the web needs to change. It's not a user interface, but it leads to many, many bad user interfaces.
The document model was a good idea for connecting a bunch of documents, but now the web is also a collection of applications. Today, I think the page/document model corrupts our thinking. We end up lumping together things that aren't related, modularizing our code wrongly, and, in the end, confusing users with our monolithic control-board-style websites.
Find dialogs that sit over the widget in which you are doing the search are terrible. Loads of apps do that. The find bar in Firefox works much better.
Many applications have multiple panes within the UI; e.g., in Outlook there are the preview pane and the inbox pane (amongst others). In these applications, cursor key presses typically apply to the currently focused pane, but there's very poor hinting to show the user which pane has focus, and there are seldom keyboard shortcuts to move the focus between panes.
The focused pane should be highlighted somehow.
Something like Alt+cursor keys should move the focus around.
Ctrl-Tab and Ctrl-Shift-Tab cycle left and right through tabs instead of MRU behavior, even though in most cases the same behavior is duplicated with Ctrl-PageUp and Ctrl-PageDown.
There are a lot, but here's an idea for a couple of them:
Remove some clicks, as in "add another" or "search item" and the like.
This is done well in Ajax-style interfaces, which have autocomplete (and auto-search), and it is slowly being adopted for platform UIs (and in some cases it originated in platform UIs).
This is how StackOverflow does it for some scenarios.
But of course, we all know that already, don't we? No need for "Search tag" or "Add another tag" buttons; they just happen.
Dialogs as you described.
The guys at Humanized proposed transparent messages, which are actually used in their product Enso and in some other places.
The Mac uses them very well for notifications (as in Growl), as does Ubuntu's new notification system.
http://blogs.sun.com/plamere/resource/NowPlayingGrowl.png
Firefox replaces the traditional "Search" dialog box with a search bar at the bottom.
Although not everyone likes the placement of the next/previous buttons, as in this screenshot.
And even SO (again) :) replaces notifications with the yellow bar.
Finally:
File managers
I really like (sometimes) the simplicity of regular file managers, but sometimes I would like to work faster/better with them.
If you compare IE 4 with IE 8 you can see the advance (even better, compare IE 4 with Google Chrome).
But if you compare the Windows 95 Explorer with Windows XP's, they are almost the same!! (Windows Vista/7 is a step forward.)
So I wonder: why haven't file managers improved as much as web browsers?
That's one reason I like tools like QuickSilver, but it is just a step. Much work is needed to create something like a "perfect program launcher" (or file manager/desktop searcher, etc., as you wish).
QuickSilver featuring the "move to" action
