I want to control a streaming music website with global hotkeys, so I can use the site's player controls (play/pause/next/etc) while another application has focus. I can use Greasemonkey to do this on the site when the browser has focus. What I can't figure out is a bridge between OS-level hotkeys and Greasemonkey.
Any suggestions?
Edit 2011-02-04:
New method: https://gist.github.com/cc9cf651f341cc938852.
The window switching was becoming a hassle and would glitch out occasionally, so I've added MozRepl to the stack (https://github.com/bard/mozrepl). Same idea; it just targets a terminal running a MozRepl instance, which controls Firefox.
Edit 2011-02-01:
AutoHotKey works well here. I put up a gist at https://gist.github.com/805417 for anyone else whom this might help.
Greasemonkey cannot do this.
I doubt that a Firefox or Chrome extension could do this.
A C, Python, etc. program probably could do it -- maybe coupled with an extension...
I found a bridge: AutoHotKey
This script I whipped up looks for a window with "Hype Machine" anywhere in the title. If one exists, it maps ctrl+shift+[e|q|...] to functions that activate the Hype Machine window, send a certain keystroke command, and then Alt+Tab back to the original window.
#SingleInstance force
SetTitleMatchMode 2      ; match "Hype Machine" anywhere in the window title
SetTitleMatchMode Fast
SendMode Input

#IfWinExist Hype Machine ; the hotkeys below only fire if such a window exists

^+d::                    ; Ctrl+Shift+D: just switch to the Hype Machine window
WinActivate
return

^+e::                    ; Ctrl+Shift+E: send the site's "n" shortcut, then Alt+Tab back
WinActivate
WinWaitActive
Send n !{Tab}
return

^+q::                    ; Ctrl+Shift+Q: send the site's "p" shortcut, then Alt+Tab back
WinActivate
WinWaitActive
Send p !{Tab}
return

#IfWinExist              ; end of the context-sensitive hotkeys
I haven't found anything relevant on Google or any Microsoft site, so I decided to ask a question here.
Everybody knows that Windows-based OSes have a virtual (on-screen) keyboard; I also know that *nix-based OSes have one too. So, the question is about:
HOW DOES IT WORK INSIDE?
I mean, let's take an example: I open the on-screen keyboard in Windows 10. What's the actual difference between:
input via the hardware keyboard, e.g. when I press the X key
..and input via the virtual keyboard, when I press the same key
Imagine I have admin access to the terminal/computer: is there any way to track/distinguish that the second time I pressed the key not on the hardware keyboard but on the on-screen version of it (by clicking with the mouse)?
There is also a lot of other software, like AutoIt (yes, it's a language, but it's relevant to this example), that emulates pressing the X key. How does it work on a Windows-based OS? Does it share the same driver/WinAPI path as the default on-screen keyboard, or is there a difference between them?
And the second case: what is the difference between
the default on-screen keyboard
a compiled AutoIt script
..and any other software that emulates pressing the X key
I guess the only way to find out "how exactly the key was pressed" is to check the current process list via taskmgr and see whether anything has been launched. Or am I totally wrong here and missing something?
THE SCOPE
I have written a node.js script which emulates key-pressing behaviour in a Windows app.
TL;DR of the business logic: open notepad.exe and type `Hello world`.
Could someone give me advice on, or recommend, a PowerShell/bat script (or any other solution) demonstrating a GetAsyncKeyState check, which I could use to easily test my own node.js script (not its business logic, but the way it triggers the "press X" event)?
I found an answer for the node.js case here: Detecting Key Presses Across Applications in PowerShell
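For reference, here is a minimal sketch of the kind of GetAsyncKeyState check being asked about, written in C++ rather than the requested PowerShell/bat (the choice of the X key just follows the example above). Note that GetAsyncKeyState reports the key as down regardless of whether the press came from hardware or was injected by software, so it only verifies that a synthesized press registers; it cannot distinguish the source (see the hook sketch further down for that).
#include <windows.h>
#include <cstdio>

int main() {
    std::puts("Polling the X key; press Ctrl+C to quit.");
    bool wasDown = false;
    for (;;) {
        // High bit set => the key is currently down.
        bool isDown = (GetAsyncKeyState('X') & 0x8000) != 0;
        if (isDown && !wasDown)
            std::puts("X went down");
        wasDown = isDown;
        Sleep(10); // coarse 10 ms polling interval
    }
}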
SendInput is the preferred method to generate user input in software. The Windows on-screen keyboard probably uses it for everything except Ctrl+Alt+Delete, which I believe has some kind of special handling. The on-screen keyboard is only able to generate Ctrl+Alt+Delete in certain configurations.
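As an illustration (not taken from the answer), a minimal C++ sketch of that SendInput call, synthesizing a press and release of the X key. The events land in whichever window currently has keyboard focus.
#include <windows.h>

int main() {
    INPUT inputs[2] = {};

    inputs[0].type = INPUT_KEYBOARD;
    inputs[0].ki.wVk = 'X';                  // virtual-key code for the X key

    inputs[1].type = INPUT_KEYBOARD;
    inputs[1].ki.wVk = 'X';
    inputs[1].ki.dwFlags = KEYEVENTF_KEYUP;  // key release

    // Returns how many of the events were inserted into the input stream.
    UINT sent = SendInput(2, inputs, sizeof(INPUT));
    return (sent == 2) ? 0 : 1;
}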
Software-generated input is merged with normal hardware input in the RIT (Raw Input Thread) in the kernel.
A low-level keyboard hook can detect software-generated input.
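A minimal C++ sketch of such a hook: the LLKHF_INJECTED flag is set on events that arrived via SendInput/keybd_event rather than from the keyboard driver, which is what lets the hook tell the two apart (input synthesized at driver level would still look like hardware).
#include <windows.h>
#include <cstdio>

LRESULT CALLBACK LowLevelKeyboardProc(int code, WPARAM wParam, LPARAM lParam) {
    if (code == HC_ACTION && (wParam == WM_KEYDOWN || wParam == WM_SYSKEYDOWN)) {
        const KBDLLHOOKSTRUCT* k = reinterpret_cast<const KBDLLHOOKSTRUCT*>(lParam);
        bool injected = (k->flags & LLKHF_INJECTED) != 0;
        std::printf("vk=0x%02X  %s\n", static_cast<unsigned>(k->vkCode),
                    injected ? "injected (software)" : "hardware");
    }
    // Always pass the event on so the rest of the system still sees it.
    return CallNextHookEx(nullptr, code, wParam, lParam);
}

int main() {
    HHOOK hook = SetWindowsHookExW(WH_KEYBOARD_LL, LowLevelKeyboardProc,
                                   GetModuleHandleW(nullptr), 0);
    if (!hook) return 1;

    // A low-level hook needs a message loop on the installing thread.
    MSG msg;
    while (GetMessageW(&msg, nullptr, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessageW(&msg);
    }
    UnhookWindowsHookEx(hook);
    return 0;
}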
I'm using a command-line tool called terminal-share to use macOS's system sharing service.
The tool's code can be found here:
https://github.com/mattt/terminal-share/
But there is one little problem with this tool: since it is a headless command-line application, it doesn't have any windows of its own.
When invoked, it (using NSSharingService) brings up the default sharing window, but that window doesn't get focus.
So I have to click the post button with the mouse instead of pressing Cmd+Shift+D (or Cmd+Enter) to send the share, because there is no focus (the window that receives key events is still the terminal emulator that launched the tool).
This is quite annoying. Is there any better way to fix this?
I've investigated the NSSharingService API; it doesn't expose anything about the default sharing window. Is there any way to keep this tool headless and still have the default sharing window become focused when it appears?
Thanks.
I want to disable/block mouse clicks and keyboard typing for 6 seconds after launching a .exe file while displaying an advsplash splash screen.
Currently I manage to run a .exe file, activate the splash, block the keyboard and run a second .exe, but then I need to restart the computer to unlock the mouse/keyboard.
Any idea how to do this without having to restart the machine?
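One way to do this at the Win32 level is BlockInput, and the usual reason input stays dead is that nothing ever calls BlockInput(FALSE) afterwards. A minimal C++ sketch of the pattern follows; it is a standalone program, not the asker's NSIS/advsplash setup, so treat it as an illustration of the API only (on Vista and later the call may also require an elevated process).
#include <windows.h>

int main() {
    // Block both mouse and keyboard input.
    if (!BlockInput(TRUE))
        return 1;

    Sleep(6000);        // the splash screen would be shown during these 6 seconds

    // Explicitly re-enable input; without this the block outlives the splash.
    BlockInput(FALSE);
    return 0;
}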
This sounds like something you should never do.
If you want to do UI automation, Windows already has support for that; using SendInput or keybd_event is not a good idea. Some apps steal foreground focus -- this is just a fact -- and if that happens at the wrong time you end up sending input to the wrong window.
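If "UI automation" here means the UI Automation API, this is roughly what the supported route looks like: instead of synthesizing keystrokes, you find the control and invoke it directly, so foreground focus never matters. A rough C++ sketch, where the window title and button name are made up for the example:
#include <windows.h>
#include <uiautomation.h>

int wmain() {
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);

    IUIAutomation* uia = nullptr;
    if (FAILED(CoCreateInstance(__uuidof(CUIAutomation), nullptr, CLSCTX_INPROC_SERVER,
                                __uuidof(IUIAutomation), reinterpret_cast<void**>(&uia))))
        return 1;

    // Hypothetical target window and button label.
    HWND hwnd = FindWindowW(nullptr, L"My Dialog");
    IUIAutomationElement* window = nullptr;
    if (hwnd && SUCCEEDED(uia->ElementFromHandle(hwnd, &window)) && window) {
        VARIANT name;
        VariantInit(&name);
        name.vt = VT_BSTR;
        name.bstrVal = SysAllocString(L"OK");

        IUIAutomationCondition* cond = nullptr;
        uia->CreatePropertyCondition(UIA_NamePropertyId, name, &cond);

        IUIAutomationElement* button = nullptr;
        if (cond) window->FindFirst(TreeScope_Descendants, cond, &button);
        if (button) {
            IUIAutomationInvokePattern* invoke = nullptr;
            button->GetCurrentPatternAs(UIA_InvokePatternId, IID_PPV_ARGS(&invoke));
            if (invoke) {
                invoke->Invoke();   // presses the button without touching focus
                invoke->Release();
            }
            button->Release();
        }
        if (cond) cond->Release();
        VariantClear(&name);
        window->Release();
    }

    uia->Release();
    CoUninitialize();
    return 0;
}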
I am using AutoIt to handle a JavaScript popup; the code is:
autoit.WinWaitActive("[Class:#32770]")
result = autoit.ControlClick("[Class:#32770]", "", "Button1")
But when I click the button that opens the popup, it waits for a long time, and if the user is performing operations in another window it goes no further. It only works once the user clicks back on the current window. In other words, the user has to keep the IE browser focused at the time of the JavaScript popup.
Most tools that work at the OS UI level (as AutoIt does) require the window being worked on to have focus in order for things like clicks or keyboard input to end up in the correct window.
If you are using AutoIt, you'll probably want to set the focus first, then try to click.
There are other, more elegant methods for dealing with JS popups, especially with more current versions (1.9.0 or above) of Watir. Refer to the JavaScript Popups page in the Watir wiki.
Do be aware that most of the solutions you see presume that the browser will have focus. If you need to run scripts at the same time as doing other work and don't want the two to interfere, I'd recommend running the scripts in a virtual machine.
I usually have more than 10 application windows open. When I write code I need to switch quickly between a browser, an IDE and terminal windows. Alt+Tab is too slow; there are too many windows to choose from.
Virtual desktops are a workaround for me: on the first desktop I keep the browser, on the second the IDE, and so on, which lets me switch quickly between my most important applications.
Now the question: is there a utility for Windows XP / Vista that allows assigning a keyboard shortcut like Alt+F1..F10 to an open application window?
UPDATE: All the programs I've found only let you assign a shortcut to an application, e.g. they open a new instance of Firefox instead of switching to one that's already open. The closest to what I need is Switcher, which displays big thumbnails of all open windows with numbers assigned to press.
I've found AutoHotkey to be very powerful. Here is part of my test script.
SetTitleMatchMode, 2                ; match window titles anywhere, not just at the start

#z::Run http://stackoverflow.com/   ; Win+Z: open Stack Overflow

^!n::                               ; Ctrl+Alt+N: switch to Notepad, or start it
IfWinExist Notepad
    WinActivate
else
    Run Notepad
return

!F1::                               ; Alt+F1: switch to Firefox, or start it
IfWinExist Firefox
    WinActivate
else
    Run Firefox
return

!F2::                               ; Alt+F2: switch to a window with "Commander" in its title
IfWinExist Commander
    WinActivate
return

!F3::                               ; Alt+F3: switch to a window with "Carbide" in its title
IfWinExist Carbide
    WinActivate
return
Just use the Win32 API KBS.
There's a fair number of shareware apps for keyboard shortcuts out there. Take a look at Stardock's Keyboard Launchpad; it's supposed to be able to do stuff like that.