What is the rationale behind Most-Recent-Order for tab switching? - user-interface

I can't understand the reasoning behind Most-Recent-Order (how Windows sorts windows when switching via Alt+Tab) when used for tab/window/document/task switching. The way Firefox does tab switching (tabs stay in a consistent order, Ctrl+Tab/Ctrl+Shift+Tab for moving to the next/previous tab) seems much more natural than switching in chronological order.
When there are more than ~5 tabs or windows, I quickly forget the chronological order in which they were opened, so the sequence of switching between them becomes difficult to predict. Even if I remember which tab was active before the current one, and which one was active prior to that, it takes a lot of keystrokes to switch to them, more than if I could simply use the fixed order as in Firefox or Chrome.
Is there any rational reason to use MRO in an application apart from backward compatibility (for users accustomed to the old hotkeys and usage patterns)?
Why is it still used in Windows for Alt+Tab switching between applications?

It's used because you're often switching between just two applications, so a single Alt+Tab flips back and forth between them. If it didn't use a most-recently-used order, it could take multiple keypresses to keep switching between the two apps I'm interested in at the moment.
Similarly, in an IDE, I could be switching back and forth between two files and don't want to hit Ctrl+Tab lots of times to keep making the same switch.
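To illustrate the behaviour (a purely hypothetical sketch in Swift, not how any platform actually implements its switcher): the switcher keeps items in most-recently-activated order, so one press always returns to the last thing you used, and repeated single presses bounce between the same two items.

struct MRUSwitcher<Item: Equatable> {
    private(set) var order: [Item] = []

    // Call whenever an item becomes active (opened or switched to):
    // it moves to the front of the list.
    mutating func activate(_ item: Item) {
        order.removeAll { $0 == item }
        order.insert(item, at: 0)
    }

    // One Alt+Tab press: jump back to whatever was used immediately before.
    mutating func switchToPrevious() -> Item? {
        guard order.count > 1 else { return nil }
        let previous = order[1]
        activate(previous)
        return previous
    }
}

var switcher = MRUSwitcher<String>()
["Mail", "Editor", "Browser"].forEach { switcher.activate($0) }
_ = switcher.switchToPrevious()   // "Editor"
_ = switcher.switchToPrevious()   // back to "Browser": two items flip back and forth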

I appreciate it in Windows Alt-Tab switching, but hate it for situations where tabs are visually present at the top of the screen. When I can see the tabs, I want to switch between them in the order I can see them, since I can probably at a glance (and without any conscious effort) count the number of times I need to press tab; and generally I'm working with few enough tabs (I'm an avid hater of clutter) that this works for me.
In Windows, when the alt-tab dialog (if dialog is the appropriate term) doesn't persist, I want to use MRO for switching.

People usually work well with a few items at a time, and the attention span allows us to better concentrate onto a couple of recent pieces of information rather than onto lengthy lists. Think of it in terms of caching this recent information: if you've used something in the last couple of seconds, and switched away from it for a moment, it's likely that it will be the first thing that you'll want to use again when you switch.

Related

Are keyboard shortcuts mandatory for 508 compliance?

I researched a lot on this and seem to be getting conflicting answers on SO and all over the web. I understand that Section 508 compliance DOES NOT equal accessibility.
The biggest thing is that the UI/UX designer is being told that the dropdown menu needs to have keyboard shortcuts to be 508 compliant. I see Windows Forms applications having this, but for web development I do not think that is mandatory to be "compliant".
My other question that was answered is here: MVC 4 site 508 compliant
I partially agree with thinice, but agree with the first two sentences of the comment left.
The sentences I am referring to are:
They should be -reachable- by keyboard for 508. I'm maintaining emphasis on the difference between a shortcut and being reachable
Crixus said:
The biggest thing is that the UI/UX designer is being told that the dropdown menu needs to have keyboard shortcuts to be 508 compliant.
You need to clarify this. Do you mean a simple <select> or a drop-down for a navigation menu? As thinice stated in the comments, Section 508 just says it needs to be reachable. The question becomes:
How are you adding shortcut keys to your application? Are you adding them via the accesskey attribute, or the way Gmail/Yahoo Mail adds shortcut keys?
I thought I had written an answer about access keys, but I cannot find it. Essentially, accesskey sounds like a great thing, but if you look at the keys you are allowed to use that do not interfere with either browser or assistive-technology keys, you are quite limited. Gez Lemon did an overview of access keys and their issues. If you want to do the Yahoo! Mail approach, you have to do a bit more work; Todd Kloots made a presentation about ARIA, which may be helpful.
Which leads me into the second part. If you are using JavaScript heavily on a site, people use both the 1194.21 (software application/OS) and 1194.22 (web) standards to evaluate it. If the site uses JS to make a nav menu (YUI menu example), the drop-down behavior needs to be reachable by keyboard. I would say this falls under:
§ 1194.21 Software applications and operating systems.
(a) When software is designed to run on a system that has a keyboard, product functions shall be executable from a keyboard where the function itself or the result of performing a function can be discerned textually.
AND
(c) A well-defined on-screen indication of the current focus shall be provided that moves among interactive interface elements as the input focus changes. The focus shall be programmatically exposed so that assistive technology can track focus and focus changes.
I say both standards are used because (a) says you have to be able to get into the navigation area via the keyboard. (c) comes into play because in some menus you can tab to all of the parent items, but you cannot get into the drop-down part without a mouse. I have seen menus where you can tab to the sub-menu items, but the menu does not pop open. So if you just use the keyboard (mobility impairments), versus using JAWS, you will have no idea where you are.
I see Windows Forms applications having this, but for web development I do not think that is mandatory to be "compliant"
I would say actual applications, like Word, Outlook, etc., supply shortcuts to frequently used commands. If you are doing this for a web application, think about how many you add; it is not a mandatory piece of being compliant. If you are making something like a navigation bar, I would recommend using ARIA roles, specifically role="navigation", on the parent element as a best practice.
The problem with some standards (as well as many laws) is that they're open to interpretation...
The only mention I can find in the 508 standards that mentions keyboard use is this (verbatim):
Subpart B -- Technical Standards
§ 1194.21 Software applications and operating systems.
(a) When software is designed to run on a system that has a keyboard, product functions shall be executable from a keyboard where the function itself or the result of performing a function can be discerned textually.
My spin on this is:
A keyboard shortcut for navigation options may be impractical given the number of operations/features a given section may contain. It is important that they're reachable -somehow- via keyboard.
From a UX standpoint, key features should have shortcuts "just because" it's good UX practice. But adding a shortcut for everything goes from one ditch into the other.
508 != accessibility, but if you work for a gov/edu, chances are it's in your PD to be compliant.
At the other end of the spectrum is WCAG, which is pretty much coupled with 508 compliance and, in my book, better defined: keyboard stuff is under 'Operable' in WCAG.
In a nutshell:
It's good practice for UX to have custom keyboard shortcuts for important features, but that has no bearing on 508 compliance by itself (with the exception that functionality should be reachable by keyboard -somehow-).
There are levels of 508 compliance, if you're talking about a government project. Some departments assign 508 scores to their developers, and it factors into your score for future contracts. 508 Compliance only requires that everything is reachable by keyboard, which is usually true, in a way. Screen readers will read everything that's not hidden, and tab keys will take people through links. But if you want a good score, you must address the intent and not only the letter of the law.
Edit: Screen readers will read some hidden elements. One method is to absolutely position an item above the screen with a negative top position. Another is to use the clip property.
http://adaptivethemes.com/using-css-clip-as-an-accessible-method-of-hiding-content/
But if you're using display:none, heights of zero, and javascript toggles, many screen readers will not speak these items.
In the case of a drop-down, you are actively hiding elements from screen readers etc., so you do have to fix it, because most screen readers won't announce things with display:none.
You will not find definitive documentation on keyboard navigation. The reason no one will specify exactly what to do is that there are so many potential conflicts - with the browser, the OS, etc. There are also no standards, although ARIA is making progress:
http://www.w3.org/TR/wai-aria-practices/#keyboard
I would not put accessKeys on a menu, if that's what you meant.
Instead see: http://www.w3.org/TR/wai-aria-practices/#aria_ex_widget
I would save actual access keys for major things like 'Search' and 'Home'. Adding a learning curve to your site wouldn't help the cause if you had an access key for everything. If, for example, you assigned "About Us" accesskey="A" and had 20 access keys mapped to letters, it would be a mess.
I've been doing 508 sites for a long time, and personally, I just don't use drop-downs. It's far simpler to add subpage menus. And I personally hate clicking on dropdowns. Dropdowns require a precision in clicking that just irritates me, and doesn't help with accessibility, because remember accessibility also includes people who don't click very well. Plus, dropdowns are limited in the number of levels you can have, not technically but from a UX view.
What I use:
Tab indexes.
Carefully placed menus so that a user won't get a huge list of links before hearing the basic idea of the site or page.
On some projects, tree menus with matching arrow-key page navigation, sequentially.
Accesskeys H for home and S for search, if needed.
The problem especially is in sorting information. Think how quickly you scan a long list of links, and then imagine sitting there and waiting for it to be read to you. Perhaps, organize your content into digestible pieces & let the search box do the scanning. Depends on the content.
Luck. :)

Readymade Cocoa Spotlight UI Components

I'm new to developing on the Mac and am looking to implement an interface similar to Spotlight's - the main part of which seems to be an expanding table/grid view.
I was wondering if there is a component Apple provides for creating something like this or is available open source else where.
Of course if not I'll just try and work something out myself but it's always worth checking!
Thanks for your help in advance.
New Answer (December, 2015)
These days I'd go with a vertical stack view (NSStackView).
You can use its hiding priorities to guarantee the number of results you show will fit (it'll hide those it can't). Note, it doesn't reuse views like a table view reuses cell views, so it's only appropriate for a limited number of "results" in your case, especially since it doesn't make sense to add a bunch of subviews that'll never appear. I'd go so far as to say outright you shouldn't use it for lists of things you intend to scroll (in this case, go with a table view).
The priority setting can be used to make sure your assumption of what should be "enough" results doesn't cause ugly layout issues by letting the stack view "sacrifice" the last few.
You can even emulate Spotlight's "Spotlight Preferences" entry (or a "show all" option) by adding it last and setting its priority to required (1000) so it always stays put even if result entries above it are hidden due to lack of space.
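As a rough Swift sketch of those priorities (the row views and the 900-based values here are purely illustrative):

import AppKit

// Illustrative only: a vertical stack of result rows where later rows get
// lower visibility priority (hidden first when space runs out), plus a
// pinned "Show All" row that mimics the always-visible last entry.
let stack = NSStackView()
stack.orientation = .vertical

let resultRows = (1...8).map { NSTextField(labelWithString: "Result \($0)") }
for (index, row) in resultRows.enumerated() {
    stack.addArrangedSubview(row)
    // Give each successive row a slightly lower priority so the stack view
    // "sacrifices" the bottom results first.
    let priority = NSStackView.VisibilityPriority(rawValue: 900 - Float(index))
    stack.setVisibilityPriority(priority, for: row)
}

let showAllRow = NSTextField(labelWithString: "Show All…")
stack.addArrangedSubview(showAllRow)
// .mustHold (1000) keeps this row visible even when result rows are hidden.
stack.setVisibilityPriority(.mustHold, for: showAllRow)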
Lately all my UI designs for 10.11 (and beyond) have been making heavy use of them. I keep finding new ways to simplify my layouts with them. Given how lightweight they are, they should be your go-to solution first unless you need something more complex (Apple engineers stated in WWDC videos they're intended to be used in this way).
Old 2011 Answer
This is a private Apple API. I don't know of any open-source initiatives that mimic it offhand.
Were I trying to do it, I might use an NSTableView with no enclosing scroll view, no headers, two columns, right-justified lighter-colored text in the left column, the easily-googled image/text cell in the right column, with vertical grid lines turned on. The container view would observe the table view for frame changes and resize/reposition accordingly.
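A rough sketch of the frame-observation part of that setup (Swift; the class name and sizing logic are just illustrative, not how Spotlight actually does it):

import AppKit

final class ResultsContainerView: NSView {
    let tableView = NSTableView()

    override init(frame frameRect: NSRect) {
        super.init(frame: frameRect)
        tableView.headerView = nil                        // no column headers
        tableView.gridStyleMask = .solidVerticalGridLineMask
        tableView.postsFrameChangedNotifications = true   // needed for the observation below
        addSubview(tableView)                             // note: no enclosing scroll view
        NotificationCenter.default.addObserver(self,
                                               selector: #selector(tableFrameChanged),
                                               name: NSView.frameDidChangeNotification,
                                               object: tableView)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    @objc private func tableFrameChanged(_ note: Notification) {
        // Grow or shrink to wrap the table's current height, Spotlight-style.
        setFrameSize(NSSize(width: frame.width, height: tableView.frame.height))
    }
}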
Adding: It might be a good idea also to see if the right/left justified text (or even the position of the columns) is different in languages with different sweep paths. Example: Arabic and Hebrew are read right-to-left. Better to adapt than to say "who cares" (he says flippantly while knowing full well his own apps have problems with this sort of thing :-)). You can test this by making sure such languages are installed on your computer, then switching between them and testing out Spotlight. Changing languages shouldn't pose an issue since the language switching UI doesn't rely on reading a foreign language. :-)

Where does the delete control go in my Cocoa user interface?

I have a Cocoa application managing a collection of objects. The collection is presented in an NSCollectionView, with a "new object" button nearby so users can add to the collection. Of course, I know that having a "delete object" button next to that button would be dangerous, because people might accidentally knock it when they mean to create something. I don't like having "are you sure you want to..." dialogues, so I dispensed with the "delete object" button. There's a menu item under Edit for removing an object, and you can hit Cmd-backspace to do the same. The app supports undoing delete actions.
Now I'm getting support emails ranging from "does it have to be so hard to delete things" to "why can't I delete objects?". That suggests I've made it a bit too hard, so what's the happy middle ground? I see applications from Apple that do it my way, or with the add/remove buttons next to each other, but I hate that latter option. Is there another good (and preferably common) convention for delete controls? I thought about an action menu but I don't think I have any other actions that would go in it, rendering the menu a bit thin.
Update: I should also point out that delete should be an infrequent operation - the app is in beta, so users are trying out everything. This is a music practice journal, so creating new things to practise happens every so often (and is definitely needed when you start out using the app), but deleting them is not so frequent.
Drew's remark is always your first consideration. All other things being equal, I'm not a fan of making deletion as easy as creation; it's a dangerous and comparatively rarer action, and the UI should reflect that fact. However, not having an explicit delete control can indeed lead to support enquiries (the same happened in MoneyWell after the minus buttons were removed). The issue is that you won't hear from the people who were saved from accidentally hitting a too-close-to-the-plus deletion control; those people are happy and quiet. You will, however, hear from those who can't immediately find a button to click for deletion, even though almost all of Apple's applications have no such control.
If you feel that you need explicit UI for deletion, I think you can find a middle ground. The problem with deletion controls is accidental triggering, and the conventional "solution" to that problem is a confirmation alert. The problem with that is how intrusive and jarring they are, because they're modal. iPhone OS can teach us a lesson here: you can make confirmation entirely contextual and non-modal.
An example is row deletion: swipe to put the row into its "are you really sure you want to delete?" state (which visually tends to slide a red Delete button into view), then interact again (by tapping Delete) to actually confirm the action. There's a similar model on the App Store, whereby tapping the price button changes it into a Purchase button; it's essentially an inline, non-modal confirmation. The benefit is that if you tap anywhere else (or perhaps wait a while), the control returns to its normal state on its own - you don't need to explicitly dismiss it before continuing work.
Perhaps that sort of approach (non-modal change as a sort of inline confirmation) can get rid of the support queries by making deletion controls explicit, but also patch up some of your reasonable concerns about intrusive confirmation.
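A minimal sketch of what that could look like in a Cocoa app (Swift; the class name, button titles, and 3-second timeout are illustrative, not any Apple API): the first click only "arms" the button, a second click really deletes, and doing nothing lets it quietly revert.

import AppKit

final class TwoStepDeleteController: NSObject {
    let button = NSButton(title: "Delete", target: nil, action: nil)
    var onConfirmedDelete: (() -> Void)?

    private var armed = false
    private var revertTimer: Timer?

    override init() {
        super.init()
        button.target = self
        button.action = #selector(clicked)
    }

    @objc private func clicked() {
        if armed {
            revertTimer?.invalidate()
            disarm()
            onConfirmedDelete?()            // the second click really deletes
        } else {
            armed = true
            button.title = "Really delete?"
            // Quietly revert if the user does nothing, so there is no modal
            // confirmation to dismiss.
            revertTimer = Timer.scheduledTimer(withTimeInterval: 3, repeats: false) { [weak self] _ in
                self?.disarm()
            }
        }
    }

    private func disarm() {
        armed = false
        button.title = "Delete"
    }
}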
I would say this depends on how important deletion is to the particular task. Is it something that the user has to do often, or very rarely? If it is rare, delete should just be left as an Edit menu option, and perhaps as backspace. (Why Cmd-backspace? If you can just have backspace, you probably won't get as many queries.)
As with everything in interface design, my take is to apply an 80-20 rule. If something belongs to the 20% of most used functionality, it should be exposed directly in the interface. If it is in the other 80%, you can hide it deeper (eg in a menu, action menu etc).
A + button is definitely in the top 20% --- you can't do anything without it --- whereas a delete is usually not a common operation, and is destructive, so can probably better be hidden away a bit.
The usual solution to this problem is to put the [+] and [-] buttons next to each other (see, for example, the Network pane in System Preferences). I generally find those buttons large enough that I don't hit the wrong one by mistake, although I can see that potentially being a problem.
If that option doesn't suit you, maybe take inspiration from Safari: put an 'x' inside the selected (or hovered) item.
Since your app supports undoing of deletion, I would suggest that you err on the side of making deleting stuff easy (at the expense of making it too easy) and make it obvious that these mistakes are easily undo-able. GMail does a decent job of that.
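For example, a minimal sketch of undo-backed deletion (the Library type and String items here are illustrative, assuming a simple array-backed collection):

import Foundation

final class Library {
    var items: [String] = []
    let undoManager = UndoManager()

    func delete(at index: Int) {
        let removed = items.remove(at: index)
        undoManager.registerUndo(withTarget: self) { library in
            library.insert(removed, at: index)   // undo puts the object back
        }
        undoManager.setActionName("Delete \(removed)")
    }

    func insert(_ item: String, at index: Int) {
        items.insert(item, at: index)
        undoManager.registerUndo(withTarget: self) { library in
            library.delete(at: index)            // redo removes it again
        }
    }
}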
HTH.
How frequently is delete needed? Does the data, and the user's expectation, encourage deleting this data often (is it a list of tasks, for example)? If so, I'd certainly include a contextual action menu, even if Delete was the only option.
Cmd+Backspace may be a little unusual for people too - I know it's used in other places on OS X, but those places also provide context menus to expose the delete. I'd be surprised if every user knows about Cmd+Backspace, so I'd probably change it to Backspace (you do have undo support, so you're covered there).
Finally, and hopefully I don't sound like a git, but it suggests that the built-in help doesn't offer enough guidance on this - might be worth revising it?
Matt gave pretty much the same answer I was going to write.
Note that when you delete the object, you should animate it away: this provides valuable visual feedback: the animation (about 1/3 of a second is good) is long enough to catch the user’s eye, and they’ll see the object disappearing. If the object just disappeared without animating, the user would notice that something had changed instantaneously in the list, but would be less certain what it was. The animation reinforces the meaning of the delete button in the user’s mental model.
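For what it's worth, a minimal sketch of that fade-out in Swift, assuming the deleted object is represented by some NSView (e.g. a collection view item's view):

import AppKit

func animateRemoval(of view: NSView, completion: @escaping () -> Void) {
    NSAnimationContext.runAnimationGroup({ context in
        context.duration = 0.3              // about 1/3 of a second
        view.animator().alphaValue = 0      // fade the item out
    }, completionHandler: {
        view.removeFromSuperview()
        completion()                        // e.g. remove the model object and register undo
    })
}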

User interface paradigms that need changing?

Oftentimes convention is one of the most important design considerations for a user interface. Usually the advice is to do it the way Microsoft does.
This is for three reasons:
If it ain't broke, don't fix it.
If your users expect to click on a floppy disk icon to save, don't change the icon (even though some of them may have never seen an actual floppy disk).
Users don't want to re-learn the interface (and hot keys, etc.) with each different application they use.
At the same time, Emerson said "A foolish consistency is the hobgoblin of little minds." So when does maintaining a consistent user interface cross the line from a good idea to stagnated innovation?
Microsoft shook up the good old WIMP GUI with the introduction of the toolbar, and then again with the Ribbon control (which is the natural evolution of the toolbar, like it or not). Now we are seeing ribbons everywhere.
So my question is, what are some user interface paradigms that are accepted and consistent across multiple applications, but have stayed past their prime and are starting to reek? Are there some important changes that would benefit from a grass roots push by developers to innovate and improve the user interface experience for our users?
One thought that came to mind for me is the modal pop-up dialog. You know the ones that say "Are you sure you want to...? [Yes] [No] [Cancel] [Maybe]" and their evil twin "Successfully completed what you wanted to do! [OK]". We are seeing a movement away from these with the "info panel" in browsers. I think they need to be adopted in Windows application development as well.
If possible please list a solution for each stale UI item.
And please don't list Clippy. We all know he was a bad idea.
NOTE: This is specifically Windows client user interface paradigms, but I am certainly open to drawing inspiration from the web, the Mac, etc.
You mentioned popup modal dialogs, and I'd argue that non-modal ones are just as bad. Any dialog box removes focus from the program, it can end up behind the program and be hard to find, and it might not even appear on the same virtual screen.
I'd like to see an end to all dialog boxes. If you need to stop someone from using the UI because of some non-normal circumstance, then remove the relevant parts of the UI from the window, and replace it with what the dialog would contain. Bring back the UI once the problem has been handled.
Clicking things on touch interfaces
It's incredibly difficult to click on things on a touch interface, because you don't know when you have pressed the screen hard enough. And if you add an animation to the button you are clicking, you most likely won't see it, because your finger is in the way. Adding other reactions, like vibrating the phone or painting waves on the screen, might work, but there is usually a delay which is too large, much larger than the tactile sense of a button being pressed. So until they invent a screen with buttons that can physically be pressed, all touch devices should move towards dragging user interfaces (DUIs) instead.
Counterintuitively, it is easier to press an object on the screen, drag it, and then release it than it is to just press and release it. It's probably because you can see the object moving when you start dragging, and you can adjust the pressure while dragging it. Dragging also has a lot more options, because you now have a direction, not just a point that you clicked. You can do different things if the user drags the object in different directions. Speed might also be used, as well as the point where the user releases the object. The release point is the real strength of DUIs, because it is very easy to release something, even with pixel precision.
Some designs have started to use DUIs, like (here we go) the iPhone, Palm Pre, and Android phones. But only part of their design is DUI; the rest is clicking. One area they all have in common is the keyboard: instead of clicking on a key, the user presses any key, then drags their finger towards the key they really wanted to hit. Unlocking these phones also uses dragging.
Other easily implemented DUI features would be things like mouse gestures, where dragging in different directions, or drawing different shapes, does different things. There are also alternate keyboards being researched which put a bigger emphasis on dragging. All buttons can be changed into switches, so you have to drag them down a bit to click them. With well-designed graphics, this should be intuitive to the user as well.
The Apple Human Interface Guidelines are a good read on this topic. They discuss this from a very broad point of view and the guidelines apply to any platform, not only Mac.
The file system. I want to save a file... oops, I need to think of a file name first. Well... how about... blah.doc.
6 months later...
Where the %#*(%& * did I save that %()#*()*ing file?
The solution is to build a versioning system into the application, or better, the OS. Make files findable by their content, with a search engine, instead of forcing the user to come up with a memorable name, when all they want is for their file not to get lost.
Eliminate the save step. Type something into the application, and it's just there, with no risk of losing it by some misstep, like forgetting to save. If you want an older version, you can just pick a date and see what the document looked like back then.
To build on the search engine idea: it's a pain having to navigate some arbitrary tree structure to find your stuff. Searching is much easier. However, you might still want something like a "folder" to group multiple files together. Well, you can build a richer metadata system, have a "category" or "project" field, and set up the search engine to show items by project or by category. Or group by those, or whatever new UI discovery we make next.
This question is a bit too open-ended, IMHO.
However, my main approach when designing anything is:
Fits in wherever it is. If it's a Windows app, I copy MS as much as possible.
It's simple.
It provides options
Buttons have a nice description of what the result of clicking will be, as opposed to 'yes' or 'no'.
Harder to answer the rest of your post without spending hours typing out an arguably useless (and repeated) set of guidelines.
In my mind, the one thing that really stands out is that USERS need more and easier control over the application's user interface appearance and organization.
So many interfaces cannot be modified by the user so that the most used/favorite functions can be grouped together. This ability would make it even easier to get things done in your favorite software.
Error messages need a "Just do it!" button.
Seriously, I really don't care about your stupid error message, just DO WHAT I TOLD YOU TO DO!!!
I think the entire Document model of the web needs to change. It's not a user interface, but it leads to many, many bad user interfaces.
The document model was a good idea to connect a bunch of documents, but now the web is also a collection of applications. Today, I think the Page/document model corrupts our thinking. We end up lumping things together that aren't related, modularizing our code wrong, and in the end confusing users with our monolithic control board type websites.
Find dialogs that sit over the widget in which you are doing the search are terrible. Loads of apps do that. The find bar in Firefox works much better.
Many applications have multiple panes within the UI - eg in Outlook there's the preview pane and the inbox pane (amongst others). In these applications typically cursor key presses apply to the currently focussed pane. But there's very poor hinting to show the user which pane has focus and there are seldom keyboard shortcuts to move the focus between panes.
The focussed pane should be highlighted somehow.
Something like alt+cursor keys should move the focus around.
Ctrl-Tab and Ctrl-Shift-Tab cycling left and right through tabs instead of using MRU order, even though in most cases the same cycling behavior is already duplicated by Ctrl-PageUp and Ctrl-PageDown.
There are a lot but here's an idea for a couple of them:
Remove some clicks, as in "add another" or "search item" and the like.
This is well done in Ajax interfaces, which have autocomplete (and auto-search), and is slowly being adopted in platform UIs (and in some cases it originated in platform UIs).
This is how StackOverflow does it for some scenarios.
But of course, we all know that already, don't we? No need for "Search tag" or "Add another tag" buttons; they just happen.
Dialogs as you described.
The guys at Humanized proposed transparent messages, which are actually used in their product Enso and some other places.
The Mac uses them for notifications (like Growl, which uses them very well), as does Ubuntu's new notification system.
(Screenshot: Growl notification - http://blogs.sun.com/plamere/resource/NowPlayingGrowl.png)
Firefox replaces the traditional "Search" dialog box with a search bar at the bottom.
Although not everyone likes the placement of next/previous as in this screenshot.
And even SO (again :)) replaces the notification with the yellow bar.
Finally:
File managers
I really like (sometimes) the simplicity of regular file managers, but sometimes I would like to work faster/better with them.
If you compare IE 4 with IE 8 you can see the progress (even better, compare IE 4 with Google Chrome).
But if you compare the Windows 95 Explorer with Windows XP's, they are almost the same!! (Win Vista/7 is a step forward.)
But I wonder: why haven't file managers improved as much as web browsers?
That's one reason I like stuff like Quicksilver, but it is just a step. Much work is needed to create something like a "perfect program launcher" (or file manager, desktop searcher, etc., as you wish).
QuickSilver featuring "move to" action

Is there anything like Winsplit Revolution for Mac OS X?

Is there anything like Winsplit Revolution for Mac OS X?
Try these:
Zooom/2 ($15) has been my favorite since I installed it. Fast, flexible, and it minimizes the number of key combinations I need to remember.
Divvy ($15) might soon replace Zooom/2 for me. It's closer to Winsplit. You can arrange windows on a grid, define your own grid arrangements, and define your own shortcuts. It also minimizes the number of keystroke combinations you need to remember. BONUS: There are Mac and Windows versions, which means if you use both platforms you can use the same window management method across all your machines.
Breeze ($8) makes it easy to make windows fullscreen, split left, or split right. It also lets you save screen states (generic) and for specific apps.
Moom ($5) is a more recent entry. It supports both keyboard shortcuts and mouse shortcuts. For the mouse shortcuts, moving the cursor over the green zoom button displays a popup list of different layout options: full screen, left/right half, top/bottom half, or any of the corners.
SizeUp ($10) mimics various aspects of WinSplit functionality, but it relies on many keystroke combinations that take time to learn. The advantage is quickly moving windows. The drawback is that it uses up a lot of global keyboard shortcuts, and there are so many I couldn't remember them all.
Cinch ($7) is a mouse-driven app by the makers of SizeUp. Drag your window to various hot zones on the screen edges and the window will "cinch" to that edge and resize to fill half the screen. Similar to the built-in resizing feature in Windows 7.
MercuryMover ($20) is quite powerful and offers fine-grained control. However, there are a lot of different key combinations and, overall, I didn't find it as easy to learn or as elegant as WinSplit. I uninstalled it almost immediately. It struck me as powerful, but inefficient and unwieldy.
The DIY approach (free) mentioned in another post is to combine some AppleScripts and bind them to Quicksilver triggers. I haven't tried this, but it is a free solution.
I found the weak window management one of the hardest things to cope with when I started using a Mac.
Why go beyond Spaces and Exposé?
Winsplit significantly adds to what Spaces and Exposé can do. I didn't understand the appeal until I actually used it. Before that, I thought virtual desktops (i.e., like Spaces) were enough. Now I consider it must-have functionality, especially on large monitors and multi-monitor setups.
On my Windows machine running 3 monitors, I would rank the importance of these different apps in the following order:
Winsplit-like window rearranging
Spaces-like virtual desktops
Expose-like application switching
On my MacBook, I've learned to approach it the other way.
Expose-like application switching
Winsplit-like window rearranging
Spaces-like virtual desktops
From the Winsplit website I understand more or less the functionality; in the past I actually had my window manager (Waimea) configured to do exactly that in Linux.
You may try using Quicksilver to trigger one of a custom set of applescripts; each applescript would resize and move the currently focused window to a predefined location.
See this macosxhints post for inspiration...
ShiftIt is a free option. Assignable hotkeys to resize to different portions of the screen (Left, Right, Top, Bottom, Top Left, Top Right, Bottom Left, Bottom Right, Full Screen and Center with current size)
Link to ShiftIt on github
Just click on the big download button towards the right of the screen.
Spectacle is a good option; it's free and open source, and easy to use with keyboard shortcuts:
Windows can be moved to a number of predefined regions of the screen:
Move to the left half ⌥⌘←
Move to the right half ⌥⌘→
Move to the top half ⌥⌘↑
Move to the bottom half ⌥⌘↓
Move to the upper left ⌃⌘←
Move to the lower left ⌃⇧⌘←
Move to the upper right ⌃⌘→
Move to the lower right ⌃⇧⌘→
Another question on Stack Overflow addresses the same issue:
https://stackoverflow.com/questions/276760/tiling-window-manager-for-os-x
One answer provided links to an app called TwoUp. It's free, and does the job on OS X!
Thanks to Dong Hoon's answer, I have developed a hybrid solution. Using the AppleScript Editor, you can create scripts to resize the current window, like this:
tell application "System Events"
set _everyProcess to every process
repeat with n from 1 to count of _everyProcess
set _frontMost to frontmost of item n of _everyProcess
if _frontMost is true then set _frontMostApp to process n
end repeat
set _windowOne to window 1 of _frontMostApp
set position of _windowOne to {5, 0}
set size of _windowOne to {1150, 735}
end tell
Such a script will work on a 13" MacBook. Using subtle variations of this script saved to /Users/[YourUserNameHere]/Library/Scripts, you can configure the AppleScript Editor to show itself in the menu bar, where it will allow you to select a script to run.
Using several different scripts, I'm able to resize and reposition any window with only two clicks.
Hope this helps.
It looks like TwoUp is dead, but here are some other options:
Cinch ($7) is like Aero Snap for Mac.
Breeze ($8) allows you to save window states and restore them like a template to another window.
Divvy ($14) shows a grid on the screen where you can select boxes to indicate how you want the window to fill your screen.
I haven't used Winsplit, so I don't know how it compares, but an app I developed, Optimal Layout, offers very flexible window tiling, as well as moving and resizing from the keyboard:
http://most-advantageous.com/optimal-layout/
You can also try the Arrange application, which features resizing and repositioning via keyboard shortcuts, an on-screen menu, and window dragging.
You should also try out SecondBar. It gives you an extra menu bar on the second display, plus re-arrange options. See this link.
You can even try SplitScreenapp.com. It allows you to resize Mac windows in many ways, including full split, half split, drag and snap, etc.
I doubt it. Between Spaces and Expose, there's not much need for a third-party app to help manage multiple windows.
