I was wondering how OS X distinguishes keyboard shortcuts from multiple keys that simply happen to be pressed at the same time.
For example, I have Control + Left set up to move one Space to the left. When I press both keys on my keyboard, OS X interprets the combination as a shortcut.
Using http://manytricks.com/keycodes/ the combination does not even register; the OS seems to short-circuit it once it recognizes that it corresponds to a keyboard shortcut.
However, when an external USB footswitch sends the Control signal and the keyboard sends the Left signal, the OS does not interpret the pair as a shortcut; it just sees a Control key and a Left key pressed independently, as seen in the photo below.
I posted this on apple.stackexchange but was hoping for a more technical answer:
https://apple.stackexchange.com/questions/140732/sending-controlleft-command-with-external-footswitch-delcom-only-picking-up-c
The goal is to get the footswitch to send a Control signal that actually triggers the shortcut (Key Codes says it is sending exactly the same Control signal as when I hit the left Control key on my keyboard).
The footswitch works as expected under Ubuntu.
Thank you
Oh boy, from the Kinesis website:
Note: Modifier actions from one USB device cannot modify the input of a second USB device due to limitations designed into the Apple operating system. Example: Shift, Control, Command, or Option keystrokes programmed into the footswitch cannot modify the input of a separate USB keyboard or mouse. However, a key sequence like ‘Cmd-W’ or ‘Cmd-Shift-left arrow’ will work on a Macintosh if the entire sequence of keystrokes has been pre-programmed into the footswitch. (Footswitch can only be programmed on a Windows PC).
I just tried this using my USB keyboard and the built-in keyboard, and it seems to be true. Control + Left on the USB keyboard works fine, but Control on the USB keyboard plus Left on the built-in keyboard does not. :(
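For what it's worth, a workaround that sidesteps the cross-device merging limitation entirely is to synthesize the whole combination in software with Quartz Event Services, attaching the Control flag to the synthetic event itself. A minimal sketch, not from the original discussion, assuming the standard Apple virtual key code 0x7B for Left Arrow (compile with -framework ApplicationServices):

```c
#include <ApplicationServices/ApplicationServices.h>

int main(void) {
    // Create key-down and key-up events for Left Arrow (0x7B).
    CGEventRef down = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)0x7B, true);
    CGEventRef up   = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)0x7B, false);

    // Attach the Control modifier to the events themselves, so no
    // physical modifier key needs to be held on any device.
    CGEventSetFlags(down, kCGEventFlagMaskControl);
    CGEventSetFlags(up,   kCGEventFlagMaskControl);

    CGEventPost(kCGHIDEventTap, down);
    CGEventPost(kCGHIDEventTap, up);

    CFRelease(down);
    CFRelease(up);
    return 0;
}
```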
Related
I'm looking for the Mac OS API that virtual machine or remote desktop type programs would use to "capture" the mouse and keyboard. That is to say, I want to write a GUI program where when the user clicks in my window, the normal mouse cursor disappears, and all keyboard and mouse input is diverted to my program, including global shortcuts like cmd-tab. What's the name of this API?
Found it: CGEventTapCreate can tap into the low-level event stream to receive, filter, or insert HID events.
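A minimal sketch of such a tap (my illustration, not from the original answer): it logs every key-down system-wide, and assumes the process has been granted Accessibility permission and is compiled with -framework ApplicationServices.

```c
#include <ApplicationServices/ApplicationServices.h>
#include <stdio.h>

// Called for every tapped event; return the event to pass it through,
// or NULL to swallow it.
static CGEventRef callback(CGEventTapProxy proxy, CGEventType type,
                           CGEventRef event, void *refcon) {
    if (type == kCGEventKeyDown) {
        int64_t keycode =
            CGEventGetIntegerValueField(event, kCGKeyboardEventKeycode);
        printf("key down: %lld\n", (long long)keycode);
    }
    return event;
}

int main(void) {
    CFMachPortRef tap = CGEventTapCreate(
        kCGSessionEventTap, kCGHeadInsertEventTap, kCGEventTapOptionDefault,
        CGEventMaskBit(kCGEventKeyDown) | CGEventMaskBit(kCGEventKeyUp),
        callback, NULL);
    if (!tap) return 1; // fails without Accessibility permission

    CFRunLoopSourceRef src =
        CFMachPortCreateRunLoopSource(kCFAllocatorDefault, tap, 0);
    CFRunLoopAddSource(CFRunLoopGetCurrent(), src, kCFRunLoopCommonModes);
    CGEventTapEnable(tap, true);
    CFRunLoopRun();
    return 0;
}
```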
I own a keyboard that has an anti-ghosting mode.
It is toggled on/off using Fn+ScrollLock. When on, the codes sent are a bit different. The keyboard is still legitimate HID, but, for example, all modifier keys are seen by Linux as Shift and the actual modifier key is in another field of the event.
What I am looking for is either a ready driver (should someone happen to know about one) or some introduction to writing such input drivers. I do not know much about the ecosystem (evdev, libinput, etc.) and I do not even know where to start. If possible, the same driver should work both under X11 and under Wayland.
Just for the record, the keyboard's “shop” name is Modecom Volcano Gaming. The USB ID is 258a:1006 and it is apparently not annotated in usb.ids. The keyboard works perfectly fine in both modes under Windows.
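Not a driver, but a possible starting point: dump the raw evdev events in both modes and see exactly which fields the anti-ghosting mode changes (e.g. whether the real modifier ends up in an EV_MSC/MSC_SCAN field). A hedged sketch; /dev/input/event3 is a placeholder, so find the right node in /proc/bus/input/devices and run as root:

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/input.h>

int main(void) {
    // Placeholder node; look up the real one in /proc/bus/input/devices.
    int fd = open("/dev/input/event3", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct input_event ev;
    while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
        // EV_KEY carries the key code; EV_MSC/MSC_SCAN the raw scan code.
        if (ev.type == EV_KEY || ev.type == EV_MSC)
            printf("type=%u code=%u value=%d\n",
                   (unsigned)ev.type, (unsigned)ev.code, ev.value);
    }
    close(fd);
    return 0;
}
```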
I haven't found anything relevant on Google or any Microsoft site, so I decided to ask here.
Everybody knows that Windows-based OSes have a virtual keyboard, and *nix-based OSes have one too. So the question is:
HOW DOES IT WORK INSIDE?
I mean, for example, say I open the on-screen keyboard in Windows 10. What's the actual difference between:
input via the hardware keyboard, i.e. when I physically press the X key
...and pressing the same key on the virtual keyboard?
Imagine I have admin access to the terminal/computer: is there any way to track/distinguish that the second time the key was pressed not on the hardware keyboard but on the on-screen (mouse-clicked) version of it?
There is also a lot of software, like AutoIt (yes, it's a language, but it's relevant to this example), that emulates pressing the X key. How does it work in a Windows-based OS? Does it have anything in common with the default on-screen keyboard, using the same driver/WinAPI, or is there a difference between them?
And the second case, between:
the default on-screen keyboard
a compiled AutoIt script
...and any other software that emulates pressing the X key.
I guess the only way to find out "how exactly the key was pressed" is to check the current process list via taskmgr and see whether anything has been launched or not. Or am I totally wrong here and missing something?
THE SCOPE
I have written a node.js script which emulates key-pressing behaviour in a Windows app.
TL;DR business logic in short => open notepad.exe and type `Hello world`
Could someone give me advice or recommend a PowerShell/bat script (or any other solution) demonstrating the GetAsyncKeyState check behaviour? With that I could easily test my own node.js script (not its functionality, but its triggering of the X-key press event).
I found an answer for the node.js case here: Detecting Key Presses Across Applications in Powershell
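For reference, a minimal sketch of such a GetAsyncKeyState check in C (an illustration, not from the linked answer). Note that polling like this reports the key as down regardless of whether the press came from hardware or from injected input; distinguishing the two requires the low-level hook approach described in the answer below.

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    puts("Polling GetAsyncKeyState for 'X'; press Ctrl+C to quit.");
    for (;;) {
        // The high bit is set while the key is currently down,
        // no matter which window has focus.
        if (GetAsyncKeyState('X') & 0x8000)
            puts("X is down");
        Sleep(50);
    }
    return 0;
}
```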
SendInput is the preferred method to generate user input in software. The Windows on-screen keyboard probably uses it for everything except Ctrl+Alt+Delete, which I believe has some kind of special handling. The on-screen keyboard is only able to generate Ctrl+Alt+Delete in certain configurations.
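For illustration, a minimal SendInput sketch (mine, not from the original answer) that injects an X key press into whichever window has focus:

```c
#include <windows.h>

int main(void) {
    INPUT in[2] = {0};
    in[0].type = INPUT_KEYBOARD;
    in[0].ki.wVk = 'X';                  // key down
    in[1].type = INPUT_KEYBOARD;
    in[1].ki.wVk = 'X';
    in[1].ki.dwFlags = KEYEVENTF_KEYUP;  // key up

    Sleep(2000);                         // time to focus the target window
    SendInput(2, in, sizeof(INPUT));
    return 0;
}
```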
Software-generated input is merged with normal hardware input in the RIT (Raw Input Thread) in the kernel.
A low-level keyboard hook can detect software-generated input.
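A hedged sketch of such a hook (my illustration): events injected via SendInput arrive with the LLKHF_INJECTED flag set, which is exactly what the callback below checks.

```c
#include <windows.h>
#include <stdio.h>

static LRESULT CALLBACK KbdProc(int nCode, WPARAM wParam, LPARAM lParam) {
    if (nCode == HC_ACTION) {
        KBDLLHOOKSTRUCT *k = (KBDLLHOOKSTRUCT *)lParam;
        // LLKHF_INJECTED marks events generated by SendInput/keybd_event.
        printf("vk=0x%02lX %s\n", (unsigned long)k->vkCode,
               (k->flags & LLKHF_INJECTED) ? "injected" : "hardware");
    }
    return CallNextHookEx(NULL, nCode, wParam, lParam);
}

int main(void) {
    HHOOK hook = SetWindowsHookExW(WH_KEYBOARD_LL, KbdProc,
                                   GetModuleHandleW(NULL), 0);
    if (!hook) return 1;

    // A low-level hook needs a message loop on the installing thread.
    MSG msg;
    while (GetMessageW(&msg, NULL, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessageW(&msg);
    }
    UnhookWindowsHookEx(hook);
    return 0;
}
```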
Is there a way to programmatically find out what kind of keyboard a computer has (i.e. where keys are located, and what extra keys are present in which locations)?
A little error is acceptable if the keyboard is very non-standard, but in general the point is to construct an on-screen-keyboard-like application that can dynamically draw the keyboard layout on the screen with high accuracy.
When connected to a computer, a keyboard sends "scan codes" to the operating system. On Windows, scan codes are converted first into virtual keys (a hardware-independent mapping of the keyboard) and then into real characters.
The MapVirtualKeyEx() function of the Windows API allows you to translate between scan codes, virtual keys, and characters. It should also be able to tell you whether a key does not exist.
Together with GetKeyboardLayout(), which tells you which keyboard layout is active at any point in time (the layout can differ between running applications), it should allow you to build a pretty accurate map of the keyboard.
Anyway, have a look at the keyboard input section of MSDN.
I will add that almost all keyboards share much the same layout. Although there is no way to know where a key is physically located, you can probably guess from the scan code and basic knowledge of your own keyboards.
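As a rough illustration of the MapVirtualKeyEx()/GetKeyboardLayout() approach (a sketch, not a complete layout mapper), the following walks the scan-code range and prints which virtual key and character each code maps to on the active layout:

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    HKL layout = GetKeyboardLayout(0); // layout of the current thread

    // Walk the one-byte hardware scan codes and see which ones map
    // to a virtual key on this layout.
    for (UINT sc = 1; sc < 0x80; sc++) {
        UINT vk = MapVirtualKeyExW(sc, MAPVK_VSC_TO_VK, layout);
        if (vk == 0)
            continue; // no key with this scan code

        UINT ch = MapVirtualKeyExW(vk, MAPVK_VK_TO_CHAR, layout);
        printf("scan 0x%02X -> vk 0x%02X char '%c'\n", sc, vk,
               (ch >= 32 && ch < 127) ? (char)ch : ' ');
    }
    return 0;
}
```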
There is no mechanism by which a keyboard can tell Windows what its physical layout looks like. Easy to see with the Windows version of an on-screen keyboard, osk.exe. It seems to be able to guess the form-factor of the machine (laptop vs desktop) but on my laptop it doesn't match the layout of the keyboard.
Use the osk.exe layout as a template so nobody can complain that yours doesn't match well.
I am writing an automation script that runs on an embedded Linux target.
Part of the script involves running an app on the target and obtaining some data from its stdout. Stdout here is the SSH terminal connection I have to the target.
However, this data is available on stdout only if certain keys are pressed, and the key press has to happen on the keyboard connected to the embedded target, not on the host system from which I have SSH'd into the target. Is there any way to simulate this?
Edit:
Elaborating on what I need -
I have an OpenGL app that I run on the embedded Linux (works like regular Linux) target. It displays some graphics on the embedded system's display device. Pressing f on the keyboard connected to the target outputs the FPS data onto the SSH terminal from which I control the target.
Since I am automating the process of running this OpenGL app and obtaining the FPS scores, I can't expect a keyboard to be connected to the target, let alone expect a user to input a keystroke on it. How do I go about this?
Edit 2:
Expect doesn't work, since Expect can issue keystrokes only to the SSH terminal. The keystroke I need to send to the app has to come from the keyboard connected to the target (this is the part that needs simulating without a keyboard actually being connected).
Thanks.
This is exactly the domain of Expect, which Stack Overflow incidentally recognizes with its own tag.
The quickest way to achieve the OpenGL automation you're after, while learning as little of Expect as necessary, is likely by way of autoexpect.
I'm not at home right now (so no Linux at hand), so I can't actually try it out. But you should be able to emulate keystrokes by writing the desired key events (as binary input-event structures rather than plain text) to the keyboard's device node under /dev/input on your target. This could be done through your SSH session.
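A more reliable variant of the same idea, as a hedged sketch: rather than writing into an existing keyboard node, create a virtual keyboard through the uinput interface and emit the keystroke from there, so no physical keyboard needs to exist at all. This assumes a kernel with /dev/uinput (the UI_DEV_SETUP ioctl needs 4.4+) and root privileges; KEY_F matches the 'f' the app listens for:

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/uinput.h>

static void emit(int fd, int type, int code, int val) {
    struct input_event ie = {0};
    ie.type = type;
    ie.code = code;
    ie.value = val;
    write(fd, &ie, sizeof(ie));
}

int main(void) {
    int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);
    if (fd < 0) return 1;

    // Declare a device that can send key events, specifically KEY_F.
    ioctl(fd, UI_SET_EVBIT, EV_KEY);
    ioctl(fd, UI_SET_KEYBIT, KEY_F);

    struct uinput_setup usetup = {0};
    usetup.id.bustype = BUS_USB;
    usetup.id.vendor = 0x1234;  // arbitrary IDs for the virtual device
    usetup.id.product = 0x5678;
    strcpy(usetup.name, "virtual-keyboard");

    ioctl(fd, UI_DEV_SETUP, &usetup);
    ioctl(fd, UI_DEV_CREATE);
    sleep(1);                   // give userspace time to pick the device up

    emit(fd, EV_KEY, KEY_F, 1); // 'f' down
    emit(fd, EV_SYN, SYN_REPORT, 0);
    emit(fd, EV_KEY, KEY_F, 0); // 'f' up
    emit(fd, EV_SYN, SYN_REPORT, 0);

    ioctl(fd, UI_DEV_DESTROY);
    close(fd);
    return 0;
}
```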
A solution which would satisfy both QA and manufacturing test is to build a piece of hardware which looks like a keyboard to the embedded device and has external control. Depending on how complex your input needs to be this could be anything from a store-bought keyboard with the spacebar taped down to a microcontroller talking PS/2 or USB on the keyboard side and something else (serial, USB, ethernet) on the control side.
With the LUFA library it is remarkably easy to make a USB keyboard with AT90USB series parts. Some of them even have 2 USB ports and could be automated by USB connected to another system (or if you want to get cheeky you could have it enumerate both as a keyboard and the control device and loop the keyboard input through the embedded system).
How about `echo "f" > /dev/console`?
How about creating a text file with the inputs and running the program as follows?
`cat inputs.txt | target_executable`
The contents of inputs.txt can be something like:
y
y
y
n
y