In awesomewm is it possible to "pass through" all key events to an application? - x11

I use awesomewm with Chrome Remote Desktop, connected to a Linux box that also runs awesomewm. For the Chrome Remote Desktop app, I'd like to send all keybindings down to the application. Instead, they get intercepted by the host.
Is it possible to enter a "passthrough" mode (when an application window is active) and send all events down to the application?
When I click on a host window, I want the host to intercept the global keybindings again.

You can just remove all global key bindings with root.keys({}), I think. Additionally, c:keys({}) removes the per-client key bindings. That should get rid of anything that AwesomeWM intercepts.
Getting your key bindings back is just a root.keys(your-key-binding-table) away. (Same for the client key bindings, but I guess you don't want to restore them anyway).
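If you want a quick way to flip between the two states, a minimal sketch for rc.lua might look like this (assuming awesome 4.x, that your bindings live in the stock globalkeys table, and an arbitrary choice of Mod4+F12 as the toggle key):

-- Toggle a "passthrough" mode that drops all global key bindings except
-- the toggle itself, so you can always get back out.
local awful = require("awful")
local gears = require("gears")

local passthrough = false
local passthrough_key -- defined below; forward reference for the toggle

local function toggle_passthrough()
    passthrough = not passthrough
    if passthrough then
        root.keys(passthrough_key)                               -- only the toggle survives
    else
        root.keys(gears.table.join(globalkeys, passthrough_key)) -- restore everything
    end
end

passthrough_key = gears.table.join(
    awful.key({ "Mod4" }, "F12", toggle_passthrough,
              { description = "toggle key passthrough", group = "awesome" }))

root.keys(gears.table.join(globalkeys, passthrough_key))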

Related

Run a program on: hard "Power" button press

I want to run a program (an exe file) when the Power button of my laptop is pressed (not when the system is shutting down).
I tried getting its keycode using C# and JS, but neither captures this keypress; they only capture regular keyboard keys. The drop-down of power-button actions in Windows' power options only offers a fixed set of choices.
My problem would be solved if that drop-down had a "Run a specific program..." option, but of course they won't add one!
So, how do I get it done? Maybe using Task Scheduler?
There's no keycode for the power button. The driver sits between your OS and your hardware. When you push the "G" button on your keyboard, the driver translates that to an OS system call representing the "G" key, which your program can listen for and intercept. But there's no OS system call representing the "power" button. Instead, your driver translates it into OS system calls to initiate a shutdown, turn off the monitor, etc.
Your laptop driver allows you to configure which system call you want to initiate when the power button is pressed, but that driver is going to be unique to the brand and model of your laptop, and if they don't offer support for capturing that keypress through their driver, then you probably don't have any easy way to intercept it.
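If changing what the button does is acceptable, one hedged workaround on Windows is to set the power button action to "Sleep" in the power options and react to the suspend broadcast that every top-level window receives. A minimal sketch (the program path is a placeholder):

// Launch a program when the machine is told to suspend, e.g. because the
// power button was pressed with its action set to "Sleep".
#include <windows.h>
#include <shellapi.h> // ShellExecuteW; link with shell32.lib

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) {
    if (msg == WM_POWERBROADCAST && wParam == PBT_APMSUSPEND) {
        // The launched process will effectively run once the machine resumes.
        ShellExecuteW(nullptr, L"open", L"C:\\tools\\my-program.exe", // placeholder
                      nullptr, nullptr, SW_SHOWNORMAL);
        return TRUE;
    }
    return DefWindowProcW(hwnd, msg, wParam, lParam);
}

int WINAPI wWinMain(HINSTANCE hInst, HINSTANCE, PWSTR, int) {
    WNDCLASSW wc = {};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.lpszClassName = L"PowerButtonWatcher";
    RegisterClassW(&wc);
    // A hidden top-level window still receives WM_POWERBROADCAST.
    CreateWindowW(wc.lpszClassName, L"", WS_OVERLAPPED, 0, 0, 0, 0,
                  nullptr, nullptr, hInst, nullptr);
    MSG m;
    while (GetMessageW(&m, nullptr, 0, 0) > 0) {
        TranslateMessage(&m);
        DispatchMessageW(&m);
    }
    return 0;
}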

The Focus of the Default Sharing Window

I'm using a command-line tool called terminal-share to use macOS's system sharing service.
The tool's code can be found here:
https://github.com/mattt/terminal-share/
But there is one little problem with this tool: since it is a headless command-line application, it doesn't have any windows of its own.
When invoking it (using NSSharingService), it will open a sharing window (the default sharing window), but the sharing window won't have any focus.
So I must click the post button with the mouse instead of pressing CMD + SHIFT + D (or CMD + Enter) to send the share, since the sharing window has no focus (the window that accepts key events is still the terminal emulator that started this application).
This is quite annoying. Is there any better way to fix this?
I've investigated the NSSharingService API; it doesn't expose anything about the default sharing window. Is there any way to keep this tool headless and still have the default sharing window become focused when it comes up?
Thanks.

How would I enable an unfocused window to still receive event listeners?

I am working on a client-server application where the server, depending on which buttons are clicked, fires keyboard events. It is limited to keyboard events only.
I am making a custom keyboard for my left hand. A friend has only one hand, so something similar would benefit him too. The keyboard will be some sort of touch device, possibly a large tablet, which will establish a connection with my server through networking and port connections.
My server application (written in anything really: Python, C, C++, Java; I know a LOT of languages) will accept a flag and then execute a set of commands.
Essentially I want to do something like:
if (key_pressed == 'F1')
    execute the 'F1' key in the focused window
The server application is NOT focused and will be hidden or minimized on screen.
Is there a way to do that without building custom drivers or something?
Edit:
The client is like having a keyboard.
The server is like the driver. I want the server to fire off events as if a character device were being manipulated. I was reading a post or two which said that only focused applications receive key presses, which is fine: I want the keys to take effect in the focused window, while the server stays in the background as a process that catches the flag and carries out the key press, as in the key_pressed example above (see the uinput sketch below).
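If the server runs on Linux, the uinput subsystem is the standard way to do exactly this without a custom driver: the process registers a virtual keyboard and injects events that land in whatever window has focus. A minimal sketch (kernel 4.5+, needs write access to /dev/uinput; the device name is arbitrary):

#include <fcntl.h>
#include <linux/uinput.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

static void emit(int fd, int type, int code, int value) {
    struct input_event ie;
    memset(&ie, 0, sizeof(ie));
    ie.type = type;
    ie.code = code;
    ie.value = value;
    write(fd, &ie, sizeof(ie));
}

int main(void) {
    int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);

    // Declare what the virtual device can produce: key events, F1 only.
    ioctl(fd, UI_SET_EVBIT, EV_KEY);
    ioctl(fd, UI_SET_KEYBIT, KEY_F1);

    struct uinput_setup usetup;
    memset(&usetup, 0, sizeof(usetup));
    usetup.id.bustype = BUS_USB;
    strcpy(usetup.name, "virtual-left-hand-keyboard");
    ioctl(fd, UI_DEV_SETUP, &usetup);
    ioctl(fd, UI_DEV_CREATE);

    sleep(1); // give the desktop a moment to pick up the new device

    // Press and release F1; the focused application receives it.
    emit(fd, EV_KEY, KEY_F1, 1);
    emit(fd, EV_SYN, SYN_REPORT, 0);
    emit(fd, EV_KEY, KEY_F1, 0);
    emit(fd, EV_SYN, SYN_REPORT, 0);

    ioctl(fd, UI_DEV_DESTROY);
    close(fd);
    return 0;
}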

Send keypresses across the network

I'd like to be able to send key presses from one computer to the other. I have a voice application on one system which I use for my headset, and the other system is my main system. The voice application uses a Push-to-talk (PTT) system, which I'd rather keep.
So what I'd like to do is press a key on my main system and have it sent across the network to my secondary system. At this stage all I know is how to get the key across the network; the specifics of actually detecting the key press on my main system and emulating it on the secondary system are my problem.
The key I'd like to capture (when held down) and send to my secondary system is the right control key. I think the best way is to add a keyboard hook.
How can I do this in such a way that I can hit right control in any application on my main system and have this application pick that up and send it? When my secondary system receives the key, how do I send it to the entire system (rather than trying to find a specific application)? I'm fine with using low-level Win32 calls in unmanaged C++, I'd just like to know how to get this to work.
Thanks in advance.
It seems like you're already halfway to your own custom solution, but as an alternative you might want to check out Synergy, an open-source keyboard and mouse extender.
I found the answer: I wrote a small keyboard hook to pick up the PTT press, and then send it via the network to the secondary system. The secondary system takes this keypress and uses the SendInput function to inject the key into the system input queue. I just tested it with Teamspeak and it works brilliantly.
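For reference, a sketch of both halves on Windows, assuming you already have a networking layer (the send_to_peer call is a hypothetical placeholder):

#include <windows.h>

// Sender: a low-level keyboard hook sees right Ctrl no matter which
// application has focus. The installing thread must pump messages.
LRESULT CALLBACK LowLevelKeyboardProc(int code, WPARAM wParam, LPARAM lParam) {
    if (code == HC_ACTION) {
        const KBDLLHOOKSTRUCT* k = reinterpret_cast<KBDLLHOOKSTRUCT*>(lParam);
        if (k->vkCode == VK_RCONTROL) {
            bool down = (wParam == WM_KEYDOWN || wParam == WM_SYSKEYDOWN);
            // send_to_peer(down);  // hypothetical: notify the other machine
            (void)down;
        }
    }
    return CallNextHookEx(nullptr, code, wParam, lParam);
}

void InstallHook() {
    SetWindowsHookExW(WH_KEYBOARD_LL, LowLevelKeyboardProc,
                      GetModuleHandleW(nullptr), 0);
}

// Receiver: inject the key into the system input queue, so it reaches
// whichever window is focused (e.g. the voice application).
void InjectRightCtrl(bool down) {
    INPUT in = {};
    in.type = INPUT_KEYBOARD;
    in.ki.wVk = VK_RCONTROL;
    // Right Ctrl is an extended key; without this flag some applications
    // would see left Ctrl instead.
    in.ki.dwFlags = KEYEVENTF_EXTENDEDKEY | (down ? 0 : KEYEVENTF_KEYUP);
    SendInput(1, &in, sizeof(in));
}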

How do I get keyboard events in an NSStatusWindowLevel window while my application is not frontmost?

After creating a translucent window (based on example code by Matt Gemmell), I want to get keyboard events in this window. It seems that there are only keyboard events when my application is the active application, while I want keyboard events even when my application isn't active but the window is visible.
Basically I want behavior like that provided by the Quicksilver application (by blacktree).
Does anybody have any hints on how to do this?
There are two options:
1. Use GetEventMonitorTarget() with a tacked-on Carbon run loop to grab keyboard events. Sample code is available on this page at CocoaDev.
2. Register an event tap with CGEventTapCreate. Sample code can be found in this thread from the Apple developer mailing list.
Edit: Note that these methods only work if you check off “Enable access for assistive devices” in the Universal Access preference pane.
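For reference, a minimal sketch of option 2 (plain C, so it works from Cocoa or Carbon apps), assuming assistive access is enabled as noted:

#include <ApplicationServices/ApplicationServices.h>

static CGEventRef TapCallback(CGEventTapProxy proxy, CGEventType type,
                              CGEventRef event, void *refcon) {
    if (type == kCGEventKeyDown) {
        int64_t keycode =
            CGEventGetIntegerValueField(event, kCGKeyboardEventKeycode);
        // ... react to the key press here ...
        (void)keycode;
    }
    return event; // listen-only tap: pass the event through unchanged
}

void InstallTap(void) {
    CFMachPortRef tap = CGEventTapCreate(
        kCGSessionEventTap, kCGHeadInsertEventTap, kCGEventTapOptionListenOnly,
        CGEventMaskBit(kCGEventKeyDown), TapCallback, NULL);
    CFRunLoopSourceRef src =
        CFRunLoopSourceCreateWithMachPort(kCFAllocatorDefault, tap, 0);
    CFRunLoopAddSource(CFRunLoopGetCurrent(), src, kCFRunLoopCommonModes);
}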
A simpler route that may work better for you is to make your app background-only. The discussion on CocoaDev of the LSUIElement plist key explains how to set it up. Basically, your application will not appear in the dock or the app switcher, and will not replace the current application's menu bar when activated. From a user perspective it's never the 'active' application, but any windows you open can get activated and respond to events normally. The only caveat is that you'll never get to show your menu bar, so you'll probably have to set up an NSStatusItem (one of those icon menus that show up on the right side of the menu bar) to control (i.e. quit, bring up prefs, etc.) your application.
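For example, making an app background-only is just one entry in its Info.plist (older systems used <string>1</string> as the value):

<key>LSUIElement</key>
<true/>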
Edit: I completely forgot about the Non-Activating Panel checkbox in Interface Builder. You need to use an NSPanel instead of an NSWindow to get this choice. This setting lets your panel accept clicks and keyboard input without activating your application. I'm betting that some mix of this setting and the Carbon Hot Keys API is what QuickSilver is using for their UI.
Update:
Apple actually seems to have changed everything again starting with 10.5 BTW (I recently upgraded and my sample code did not work as before).
Now you can only capture keydown events with an event tap if you are either root or assistive devices are enabled, regardless of the level at which you plan to capture, and regardless of whether you choose an active tap (which allows you to modify and even discard events) or a listen-only one. You can still be notified when modifier flags change (and actually even change these) and receive other events, but keydown events under no other circumstances.
However, the Carbon event handler API and RegisterEventHotKey() let you register a hotkey and get notified when it is pressed; you neither need to be root for that nor need anything like assistive devices enabled. I think Quicksilver is probably doing it that way.
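A minimal sketch of that route (plain Carbon C; the Cmd+Shift+Space choice is arbitrary):

#include <Carbon/Carbon.h>

static OSStatus HotKeyHandler(EventHandlerCallRef next, EventRef event,
                              void *userData) {
    // Called whenever the registered hot key is pressed, even while the
    // application is in the background. No root or assistive access needed.
    // ... show your Quicksilver-style window here ...
    return noErr;
}

void InstallHotKey(void) {
    EventTypeSpec spec = { kEventClassKeyboard, kEventHotKeyPressed };
    InstallApplicationEventHandler(NewEventHandlerUPP(HotKeyHandler),
                                   1, &spec, NULL, NULL);

    EventHotKeyID hotKeyID = { 'htk1', 1 }; // arbitrary signature and id
    EventHotKeyRef hotKeyRef;
    RegisterEventHotKey(kVK_Space, cmdKey | shiftKey, hotKeyID,
                        GetApplicationEventTarget(), 0, &hotKeyRef);
}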
