System-wide hotkey for custom text field highlighting - Windows

I want to create a way of enabling my computer to highlight text in any given text field, of any application, as I type it. The idea is that I would press a hotkey, and all text typed after that would be highlighted until I press the hotkey again. What technologies, if any, could achieve this on a Windows XP or Windows 7 machine? And where do the current text-selection behaviors "live" (e.g. selection using Shift+arrow keys, deselection on key press, etc.)?

It's exceedingly unlikely that you could achieve this.
You can use RegisterHotKey and keyboard hooks to intercept the hot key and subsequent typing. That's not too bad.
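A minimal sketch of just the hot-key half (this assumes a WinForms message loop; the Ctrl+Shift+H combination and the class name are placeholders, not anything the question specifies):

```csharp
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

// Sketch: toggle a "highlight mode" flag whenever a global hot key is pressed.
class HotKeyForm : Form
{
    [DllImport("user32.dll")]
    static extern bool RegisterHotKey(IntPtr hWnd, int id, uint fsModifiers, uint vk);

    [DllImport("user32.dll")]
    static extern bool UnregisterHotKey(IntPtr hWnd, int id);

    const int WM_HOTKEY = 0x0312;
    const uint MOD_CONTROL = 0x0002, MOD_SHIFT = 0x0004;
    const int HOTKEY_ID = 1;
    bool highlightMode;

    protected override void OnHandleCreated(EventArgs e)
    {
        base.OnHandleCreated(e);
        // Ctrl+Shift+H is purely an example choice.
        RegisterHotKey(Handle, HOTKEY_ID, MOD_CONTROL | MOD_SHIFT, (uint)Keys.H);
    }

    protected override void OnHandleDestroyed(EventArgs e)
    {
        UnregisterHotKey(Handle, HOTKEY_ID);
        base.OnHandleDestroyed(e);
    }

    protected override void WndProc(ref Message m)
    {
        if (m.Msg == WM_HOTKEY && (int)m.WParam == HOTKEY_ID)
        {
            highlightMode = !highlightMode;   // subsequent typing would be treated differently
        }
        base.WndProc(ref m);
    }
}
```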
What you won't be able to manage is to arrange for the text you type to appear highlighted. You'd have to special-case many of the target applications. Applications like web browsers often don't use windowed controls for their input fields. There's no easy way to highlight the text as it is typed into those fields.
Your question used terms like "system-wide" and "any given text field". That's just not a realistic goal. Making this work for a single class of fields in certain apps sounds more plausible. You may be able to do it when the text is going to land in a windowed edit control, although even that sounds fraught with potential threading conflicts. You may also be able to make some headway with apps that support UI Automation, but again it doesn't seem like it would be very easy, and many apps don't support UI Automation.
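For the UI Automation route, a minimal probe of the focused element might look like this (only a sketch; it just reads the current selection via TextPattern, and whether the target app exposes TextPattern at all is the open question):

```csharp
using System;
using System.Windows.Automation;        // UIAutomationClient / UIAutomationTypes assemblies
using System.Windows.Automation.Text;

class UiaProbe
{
    static void Main()
    {
        // Grab whatever element currently has keyboard focus, in any process.
        AutomationElement focused = AutomationElement.FocusedElement;

        object pattern;
        if (focused != null && focused.TryGetCurrentPattern(TextPattern.Pattern, out pattern))
        {
            TextPattern text = (TextPattern)pattern;
            TextPatternRange[] selection = text.GetSelection();
            Console.WriteLine("Selected text: " +
                (selection.Length > 0 ? selection[0].GetText(-1) : "(none)"));
        }
        else
        {
            Console.WriteLine("Focused element does not support TextPattern.");
        }
    }
}
```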

Related

Replace special character with keyboard shortcut live on input

Is there any way to replace a special character with a keyboard shortcut live?
For instance: typing $ would actually press Ctrl+N or the left arrow key.
Any help is much appreciated!
This is primarily speculation with a little experience and research mixed in.
This sort of thing is easy enough if you are working within the application that currently has focus, but creating a universal keypress hook? Not so much.
I built a C#/C++ program in grad school that intercepted keystrokes intended for another application, but I was only able to do it by waiting for the desired application window to open, auto-opening my own pop-up window to receive the input, and then passing keystrokes back to the original window.
I'm not saying it can't be done, period, but my background knowledge (though slightly dated) and a little cursory research isn't turning up anything in the basic scripting world that would satisfy what you appear to be after.
The only way I know how to do it (which is likely wrong) would be to have hooks in every open application, and when a textbox on the application gained focus give focus to your own text-receiving app. Analyze the keypresses, and then pass the desired text/keypresses on to the original app/textbox. This would require prior knowledge of the "windows" (i.e. all objects) in all possible apps on the machine you're working on, so you would know when a textbox received focus.
If I recall, it might be possible to tell when keys are being pressed (if you have hooks in all apps) and re-direct from there, but you might lose the first keystroke, even then.
Again, this is primarily speculative.
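For what it's worth, the "hooks plus redirect" idea from the last couple of paragraphs roughly corresponds to a low-level keyboard hook (WH_KEYBOARD_LL). Here is an untested sketch of the "$ becomes Ctrl+N" example; the US-layout assumption that '$' is Shift+4 and the injected Ctrl+N are placeholders, not a recommendation:

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Windows.Forms;   // only for Application.Run(), which provides the required message loop

class DollarToCtrlN
{
    const int WH_KEYBOARD_LL = 13;
    const int WM_KEYDOWN = 0x0100;
    const int VK_SHIFT = 0x10, VK_CONTROL = 0x11, VK_4 = 0x34, VK_N = 0x4E;
    const uint KEYEVENTF_KEYUP = 0x0002;

    delegate IntPtr HookProc(int nCode, IntPtr wParam, IntPtr lParam);
    static readonly HookProc _proc = HookCallback;   // kept in a field so the GC doesn't collect it
    static IntPtr _hook = IntPtr.Zero;

    [DllImport("user32.dll", SetLastError = true)]
    static extern IntPtr SetWindowsHookEx(int idHook, HookProc lpfn, IntPtr hMod, uint dwThreadId);
    [DllImport("user32.dll")]
    static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam);
    [DllImport("user32.dll", SetLastError = true)]
    static extern bool UnhookWindowsHookEx(IntPtr hhk);
    [DllImport("kernel32.dll")]
    static extern IntPtr GetModuleHandle(string name);
    [DllImport("user32.dll")]
    static extern short GetKeyState(int vKey);
    [DllImport("user32.dll")]
    static extern void keybd_event(byte vk, byte scan, uint flags, UIntPtr extraInfo);

    static IntPtr HookCallback(int nCode, IntPtr wParam, IntPtr lParam)
    {
        if (nCode >= 0 && wParam == (IntPtr)WM_KEYDOWN)
        {
            int vk = Marshal.ReadInt32(lParam);                 // first field of KBDLLHOOKSTRUCT
            bool shift = (GetKeyState(VK_SHIFT) & 0x8000) != 0;
            if (vk == VK_4 && shift)                            // '$' on a US layout (an assumption)
            {
                // Swallow the '$' and synthesize Ctrl+N for whatever window has focus.
                // Note: the user is still physically holding Shift, so the target may
                // actually see Ctrl+Shift+N unless you also send a Shift key-up first.
                keybd_event((byte)VK_CONTROL, 0, 0, UIntPtr.Zero);
                keybd_event((byte)VK_N, 0, 0, UIntPtr.Zero);
                keybd_event((byte)VK_N, 0, KEYEVENTF_KEYUP, UIntPtr.Zero);
                keybd_event((byte)VK_CONTROL, 0, KEYEVENTF_KEYUP, UIntPtr.Zero);
                return (IntPtr)1;                               // block the original keystroke
            }
        }
        return CallNextHookEx(_hook, nCode, wParam, lParam);
    }

    [STAThread]
    static void Main()
    {
        using (var module = Process.GetCurrentProcess().MainModule)
            _hook = SetWindowsHookEx(WH_KEYBOARD_LL, _proc, GetModuleHandle(module.ModuleName), 0);
        Application.Run();
        UnhookWindowsHookEx(_hook);
    }
}
```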

Prevent the SIP/Soft Keyboard from popping up when a TextBox gets focus

In my Windows Phone 7 Silverlight application I have my own custom keypad that I want to use instead of the standard soft keyboard. The problem is that I have not found a way to completely disable or prevent the SIP/Soft Keyboard for my application or for the TextBox component.
Is it possible to disable the soft input keyboard in my application?
Is it possible to prevent the soft input keyboard from popping up when a TextBox gets focus?
Can I extend or override any functions in TextBox to make it behave the way I want?
I’ve seen solutions that hide the keyboard when a certain key is entered by moving focus off the TextBox, but I want to prevent it from ever showing up.
My problem is very similar to what's stated on How do I prevent the software keyboard from popping up? and How to prevent keyboard to show in EditText onTouch? but for Windows Phone 7 instead.
I am fully aware that some may think it is stupid to use your own keypad instead of the standard input but I have my reasons for doing it this way and I just want to know if it is possible to achieve what's described.
If you don't want to use the SIP, you don't need a TextBox.
Use a TextBlock and bind it to the input generated by the custom buttons.
Have a look at this blog post http://www.silverlightshow.net/items/Windows-Phone-7-Creating-Custom-Keyboard.aspx
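A minimal sketch of that approach (KeypadPage, KeypadButton_Click and InputDisplay are placeholder names; the assumption is that each keypad Button carries its character in its Tag):

```csharp
using System.Text;
using System.Windows;
using System.Windows.Controls;
using Microsoft.Phone.Controls;

// Sketch only: custom keypad buttons append to a string shown in a TextBlock,
// so no TextBox ever takes focus and the SIP never appears.
public partial class KeypadPage : PhoneApplicationPage
{
    private readonly StringBuilder _input = new StringBuilder();

    public KeypadPage()
    {
        InitializeComponent();
    }

    // Wired up in XAML as the Click handler of every keypad Button;
    // each button's Tag is assumed to hold the character it represents.
    private void KeypadButton_Click(object sender, RoutedEventArgs e)
    {
        _input.Append((string)((Button)sender).Tag);
        InputDisplay.Text = _input.ToString();   // InputDisplay is the TextBlock in the page's XAML
    }
}
```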
Peter, consider using THIS, with customizations. I'm working on a project where we use a custom keyboard. With some extra code and customizations I've made a custom soft keyboard, as you can see in the screenshot below. Right now, my soft keyboard is working properly, but with some issues still to be resolved.
My custom WP keyboard problems are:
There's no caret cursor;
The TextBox on my screen is an AutoCompleteBox, and when it opens the completions my keyboard loses focus, so I need an extra tap (this is my greatest problem now);
The WP native keyboard tries to slide up when I choose an item from the completions.
IsHitTestVisible = false solves your issue.
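In other words, something like this in code-behind (SearchBox is just a placeholder name for the TextBox/AutoCompleteBox):

```csharp
// Taps no longer reach the control, so it never gets focus and the SIP never opens.
SearchBox.IsHitTestVisible = false;
```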

How come some controls don't have a window handle?

I want to get the window handle of some controls to do some stuff with it (requiring a handle). The controls are in a different application.
Strangely enough, I found out that many controls don't have a window handle, like the buttons in the toolbar (?) in Windows Explorer. Just try to get a handle to the Folder/Search/(etc.) buttons. It just gives me 0.
So, first question: how come some controls have no window handle? Aren't all controls windows at heart? (Just talking about standard controls, like I would expect them in Windows Explorer, nothing custom-drawn on a pane or the like.)
Which brings me to my second question: how to work with them (like using EnableWindow) if you cannot get their handle?
Many thanks for any inputs!
EDIT (ADDITIONAL INFORMATION):
Windows Explorer is just an example. I have the problem frequently, and in a different application (the one I am really interested in, a proprietary one). I have "physical" controls (since I can get an AutomationElement for those controls), but they have no window handle. Also, I am trying to send a message (SendMessage) to get the button state, to find out whether it is pushed or not. It is a standard button that seems to expose that state only through that message, at least as far as I have seen; the pushed state can also last a lot longer on that button than you would expect on a standard button, though the Windows Explorer buttons show similar behaviour, acting like button-style checkboxes even though they are (push)buttons. SendMessage requires a window handle.
Does a ToolBar in some way change the behaviour of its child elements, taking away their window handle or something similar (using the parent handle/control ID for identification)? But then how do you use functions on those controls that require a window handle?
If they don't have a handle, they're not real controls, they're just drawn to look like controls.
But of course, the toolbar buttons in Windows Explorer do have window handles; they're part of a toolbar. Use the toolbar manipulation functions to interact with them, not EnableWindow.
Or, better yet, use the documented APIs for things like search. Reverse-engineering Windows Explorer has never ended well for anyone, least of all the poor Windows Shell team, saddled with years of backwards-compatibility hacks for certain developers who thought that APIs are for everyone else. Whatever you do manage to get to work is very likely to break on the next version of Windows.
The controls you are talking about are using the ToolbarWindow32 class. If you want to interact with them then you'll need to use the toolbar control APIs/messages. For example, for enabling buttons you'd want to use TB_ENABLEBUTTON.
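A rough sketch of what that looks like from another process (the toolbar HWND and the button's command ID are things you'd have to find yourself, e.g. with Spy++ or Window Detective; the values here are placeholders):

```csharp
using System;
using System.Runtime.InteropServices;

class ToolbarMessages
{
    [DllImport("user32.dll")]
    static extern IntPtr SendMessage(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

    const uint WM_USER = 0x0400;
    const uint TB_ENABLEBUTTON    = WM_USER + 1;    // wParam = command ID, lParam = TRUE/FALSE
    const uint TB_ISBUTTONCHECKED = WM_USER + 10;   // wParam = command ID, returns nonzero if checked

    // toolbarHandle: HWND of the ToolbarWindow32; commandId: the button's command identifier.
    static void EnableButton(IntPtr toolbarHandle, int commandId, bool enable)
    {
        SendMessage(toolbarHandle, TB_ENABLEBUTTON, (IntPtr)commandId, (IntPtr)(enable ? 1 : 0));
    }

    static bool IsButtonChecked(IntPtr toolbarHandle, int commandId)
    {
        return SendMessage(toolbarHandle, TB_ISBUTTONCHECKED, (IntPtr)commandId, IntPtr.Zero) != IntPtr.Zero;
    }
}
```

These particular messages only pass integers, so a plain cross-process SendMessage works; messages that pass structures (TB_GETBUTTON and friends) would need memory allocated in the target process.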
You can implement the controls yourself using GDI, OpenGL or DirectX. Try Window Detective on Mozilla Firefox and you will see that there is only one window; the controls in its dialog boxes are not windows known to Windows.

Get title of front window in carbon

I am writing a program to sit in the background on OS X 10.6, listen to keystrokes and record them, grouping them by window title. (No, I am not writing malicious software. I do not need this program to be sneaky in any way; I just want a safety net for when I have typed a huge email and then accidentally refresh the page (Apple-R) instead of opening a new tab (Apple-T).) I have already found Apple's EventMonitorTest example for the keystroke capturing code; now I just need to find the "key window" title.
Does anyone know where I can find examples for this kind of functionality? Thank you!
A couple of possibilities:
You could use the Accessibility API (though of course keep in mind that 64-bit Carbon does not support this)
You could use the CGWindow functions introduced in Leopard
I suspect the first option will be easier to do this with, since the CGWindow API is somewhat low-level and treats all windows (application windows, menu bars, dock icons, etc.) more or less equally.

Is there still a place for MDI? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
Even though MDI is considered harmful, several applications (even MS Office and Adobe apps) still use it, either in its pure form or as a hybrid with a tabbed/IDE-like interface.
Is an MDI interface still appropriate for some applications?
I'm thinking of an application where one typically works with several documents at one time, and often wants to have multiple documents side by side to view or to copy/paste between them.
An example would be Origin, where one has multiple worksheet and graph windows in a project; a tabbed or IDE-like interface would be much more inconvenient with a lot of switching back and forth.
On the Mac, it's natural and convenient for an application to have multiple top-level windows to solve this; what is the preferred way on Windows if one doesn't use MDI?
The disadvantages of MDI are the following:
It generally requires the user to learn and understand a more complicated set of window relations.
Many simple actions can require a two-step process. For example, bringing a desired window to the foreground can require that the user first bring the container window forward and then bring the right primary window within it forward. Resizing or maximizing a window can mean first adjusting the container window and then the primary window within it.
If multiple container windows are open, the user can forget which one has the desired primary window, requiring a tedious search.
Users are easily confused by the dual ways to maximize, iconify, layer, and close a window. For example they may close the entire app rather than a window within the container window. Or they may “lose” a window because they iconified it within the container window without realizing it.
The user is limited in the sizes and positions his or her windows can assume. Suppose I want to look simultaneously at 3 windows of one app and 1 from another app. With SDI, I can have each window take a quadrant of the screen, but I can’t do that with MDI. What if I want one window in the MDI to be large and the other small? I have to make the container window large to accommodate the large window (where it occludes the windows of other apps), but that wastes space when looking at the child.
Note that all of these disadvantages apply to tabbed document interfaces (TDI) too, with tabbed interfaces having the additional disadvantage that the user can’t look at two documents in the same container window side by side. Tabs also add clutter and consume real estate in your windows. However, overall TDIs tend to be less problematic than MDI, so they might be preferred for special cases (read on).
In summary, it is hard to think of any situation where MDI should be used. It’s no better than SDI while adding more complexity and navigation overhead, and it works poorly with the windows of other apps.
There’s no reason an app can’t have multiple top-level SDI windows. Even with an app like Origin, I don’t see a problem with a project being spread across multiple SDI windows as long as the project is well identified in each window. SDI also allows different kinds of windows (e.g., graph vs worksheet) to have different menus and toolbars, rather than hiding or disabling items depending on the active window (the former is confusing, and the latter wastes space).
SDI gives your users more flexibility than either MDI or TDI. Users can overlap or maximize the windows, and use the taskbar/dock as a de facto tab interface. Users can alternatively resize and reposition the windows so they can look at several at once. Each window can be sized independently to optimize screen space. Whatever advantages an MDI or TDI may have, you may be able to augment SDI to have those advantages too (e.g., provide a thumbnailed menu that makes switching among windows faster than using the taskbar and comparable to selecting tabs, or provide a control that iconifies all windows of an app with one click).
Unless you have compelling reasons to use a TDI, go SDI to allow this flexibility. Compelling reasons include some subset of the following:
Each tab is used for unrelated high-order tasks and the user will not be switching among tabs frequently or comparing information across tabs.
You’re working with very low-end users who are ignorant of or confused by the taskbar/dock and multiple windows, and don’t know how to resize windows (it seems compelling tab imagery works better than the taskbar for such users).
You anticipate there will typically be a large set of tabs (e.g., 4 or more) and you can control their display in a manner more effective for the task than the OS can if they were SDI windows on the taskbar/dock (e.g., with regard to order and labeling).
With SDI, you’re having problems with users confusing the toolbars or palettes of inactive windows with active windows.
Tabs are fixed in number and permanently open (e.g., when each tab is a different component of the same data object). The user is not saddled with trying to distinguish between closing a tab and closing the entire window; figuring out the window to navigate to is not an issue because all windows have the same tabs.
There is really only one way to properly arrange the data for the task, with no variation among users or what they actually use the app for. You might as well set it up for the user with a combination of tabs and master-detail panes rather than relying on the user to arrange and size SDI windows correctly.
In summary, given your users’ abilities, app complexity, and task structure, if your app can manage the content display better than the user/OS, use TDI, otherwise use SDI.
Note that the examples you used (MS Office and Adobe applications) are big programs and have lots of features. Users will be dealing with that program, and only that program for much of the program's lifetime.
Newer versions of MS Office (2007) and Adobe Photoshop (CS4) use multiple windows and tabs, respectively.
Note that with Windows 7, MDI will probably lose even more popularity because of the extra power of tabs given by Microsoft's APIs (although you needn't strictly use tabs -- MDI windows could work, but would be more confusing for the user than usual).
The old-style MDI (where, to switch between documents, you had to go through the Windows menu) was annoying. The newer MDI (like tabs in Opera and Mozilla) makes switching between documents very easy and seems to have been accepted well. It also doesn't clutter your taskbar the way having more than one document open in something without MDI does.
The main advantage of MDI is when you want to keep track of two or more windows at the same time, and those windows need to be grouped together. For example, if there's a running process in one window but you need to work in another window, MDI would be ideal.
I agree with slavy13 (old-MDI = bad, new-MDI = much better). But don't use programs like Microsoft Excel as your model. Ick! You get one window on your desktop, regardless of how many spreadsheets you have open (which may or may not be your preference). But you get one taskbar icon for each and every document you have open. And your Alt+Tab window similarly has one icon for each document you have open. Plus, there is an additional icon in there just for "Excel" which takes you to whichever document happens to be "current". So yeah, do your MDI like Mozilla. Or at least give your users the option of switching to the cleaner style.
To more succinctly answer your question, I feel the answer is yes, MDI is still appropriate in some instances. But, in all things, moderation is the key.
It appears that multiple top-level windows is the way to go. As for whether there should be one global app instance or one per document is up to you I think. It's not visible to the user.
Only one benefit for MDI:
Programs that use large amounts of resources, such as Adobe Photoshop, often have an MDI due to the prohibitive cost of running more than one instance at a time.
But you shouldn't develop programs that drain huge amounts of resources to start with.
One advantage I can see for MDI occurs if a lot of screen real estate is going to be used for stuff that's shared among many windows. It may be more logical to have such material at the top or side of the enclosing window than to have it repeated in each SDI window, or have it appear in a window entirely separate from the SDI windows. For example, a chat program might have a status pane and a control pane. Having those be visually tied to the chat windows might be better than having them as standalone windows.

Resources