Is it possible to create a virtual IOHIDDevice from userspace? - macos

I have an HID device that is somewhat unfortunately designed (the Griffin Powermate) in that as you turn it, the input value for the "Rotation Axis" HID element doesn't change unless the speed of rotation changes dramatically or the direction changes. It sends many HID reports (angular resolution appears to be about 4 degrees, in that I get ~90 reports per revolution - not great, but whatever...), but they all report the same value (generally -1 or 1 for CCW and CW respectively; if you turn faster, it will report -2 and 2, and so on, but you have to turn much faster). As a result of this unfortunate behavior, I'm finding this thing largely useless.
It occurred to me that I might be able to write a background userspace app that seized the physical device and presented another, virtual device with some minor additions so as to cause an input value change for every report (like a wrap-around accumulator, which the HID spec supports -- God only knows why Griffin didn't do this themselves).
But I'm not seeing how one would go about creating the kernel side object for the virtual device from userspace, and I'm starting to think it might not be possible. I saw this question, and its indications are not good, but it's low on details.
Alternately, if there's a way for me to spoof reports on the existing device, I suppose that would do it as well, since I could set it back to zero immediately after it reports -1 or 1.
Any ideas?

First of all, you can simulate input events via Quartz Event Services, but this might not suffice for your purposes, as it's mainly designed for simulating keyboard and mouse events.
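Posting a synthetic event that way only takes a few lines. Here is a sketch that fakes a single scroll-wheel tick (probably the closest mouse/keyboard analogue to a knob turn); build it with clang -framework ApplicationServices:

// Sketch: post one synthetic scroll "tick" via Quartz Event Services.
#include <ApplicationServices/ApplicationServices.h>

int main(void)
{
    // One line of scroll; a knob turn could be translated into a stream of these.
    CGEventRef ev = CGEventCreateScrollWheelEvent(NULL, kCGScrollEventUnitLine, 1, -1);
    CGEventPost(kCGHIDEventTap, ev);
    CFRelease(ev);
    return 0;
}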
Second, the HID driver family of the IOKit framework contains a user client on the (global) IOHIDResource service, called IOHIDResourceDeviceUserClient. It appears that this can spawn IOHIDUserDevice instances on command from user space. In particular, the userspace IOKit library contains an IOHIDUserDeviceCreate function which appears to be intended to do exactly this. The HID family source code even comes with a little demo of this, which creates a virtual keyboard of sorts. Unfortunately, although I can get this to build, it fails on the IOHIDUserDeviceCreate call. (I can see in IORegistryExplorer that the IOHIDResourceDeviceUserClient instance is never created.) I've not investigated this further due to lack of time, but it seems worth pursuing if you need its functionality.
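For reference, the shape of the code is roughly the following. This is only a sketch based on the open-source IOHIDFamily project: IOHIDUserDevice.h is not a public SDK header, the relative-dial report descriptor here is purely illustrative, it probably needs to run as root, and IOHIDUserDeviceCreate is exactly the call that fails for me.

// Rough sketch of creating a virtual HID device from user space.
#include <CoreFoundation/CoreFoundation.h>
#include "IOHIDUserDevice.h"   // copied from the IOHIDFamily sources

// Illustrative descriptor for a one-byte relative "dial" report.
static const uint8_t kDescriptor[] = {
    0x05, 0x01,  // Usage Page (Generic Desktop)
    0x09, 0x37,  // Usage (Dial)
    0xA1, 0x01,  // Collection (Application)
    0x15, 0x81,  //   Logical Minimum (-127)
    0x25, 0x7F,  //   Logical Maximum (127)
    0x75, 0x08,  //   Report Size (8)
    0x95, 0x01,  //   Report Count (1)
    0x81, 0x06,  //   Input (Data, Var, Rel)
    0xC0         // End Collection
};

int main(void)
{
    CFDataRef desc = CFDataCreate(kCFAllocatorDefault, kDescriptor, sizeof(kDescriptor));
    CFStringRef key = CFSTR("ReportDescriptor");   // property key used by the HID family demo
    CFDictionaryRef props = CFDictionaryCreate(kCFAllocatorDefault,
                                               (const void **)&key,
                                               (const void **)&desc, 1,
                                               &kCFTypeDictionaryKeyCallBacks,
                                               &kCFTypeDictionaryValueCallBacks);

    // Should create an IOHIDUserDevice in the IORegistry, backed by
    // IOHIDResourceDeviceUserClient; this returns NULL on my machine.
    IOHIDUserDeviceRef dev = IOHIDUserDeviceCreate(kCFAllocatorDefault, props);
    if (!dev)
        return 1;

    // Push a report: "the dial moved one notch clockwise".
    uint8_t report = 1;
    IOHIDUserDeviceHandleReport(dev, &report, sizeof(report));

    CFRelease(dev);
    CFRelease(props);
    CFRelease(desc);
    return 0;
}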

Related

What type of driver should I write to simulate mouse input as if it was emitted by a real device?

For testing purposes, I need to find a way to move the mouse pointer and fire click and scroll events as a real user with a real device would (in the sense of input origin, not data patterns).
Ideally, I want a driver that is able to receive instructions from a user-space app like "move the pointer to (x, y)" or "scroll down for 0.3 s". If I understand correctly, this communication can be achieved via IOCTLs.
I've read several articles from Microsoft, so I understand there are WDM and WDF, filter drivers and function drivers, kernel mode and user mode, and also HID, whose reports look like something I could use. This field is so huge - I need advice on which path to take to solve my pretty simple problem (basically, move the cursor to a point).
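For concreteness, I imagine the user-space side of that IOCTL channel would look something like the sketch below; the device name and control code are invented just to show the shape of it, and the driver side is the part I don't know how to approach.

// Sketch of the user-mode half of the IOCTL channel described above.
// The device name and IOCTL code are hypothetical; a real driver defines both.
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

#define IOCTL_MOVE_POINTER CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

typedef struct { LONG x, y; } MOVE_REQUEST;

int main(void)
{
    HANDLE dev = CreateFileW(L"\\\\.\\MyVirtualMouse", GENERIC_READ | GENERIC_WRITE,
                             0, NULL, OPEN_EXISTING, 0, NULL);
    if (dev == INVALID_HANDLE_VALUE) {
        printf("driver not installed\n");
        return 1;
    }

    MOVE_REQUEST req = { 100, 200 };   // "move the pointer to (100, 200)"
    DWORD returned = 0;
    DeviceIoControl(dev, IOCTL_MOVE_POINTER, &req, sizeof(req), NULL, 0, &returned, NULL);
    CloseHandle(dev);
    return 0;
}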

How to help Windows or Windows Applications handle Composite Joysticks correctly?

The context for this question is primarily Windows 7, though I've tried it on 10 as well.
I've built a 4-player composite joystick using the Arduino Mega 2560. It is a USB device composed of 4 HID joystick interfaces, each with its own endpoint. Each joystick, along with its buttons, shows up correctly in Device Manager as a separate HID interface. They are correctly identified by a VID/PID/MI_# triplet, with MI_# being the interface index (MI_0, MI_1, etc.). Calibration also sees each interface as separate, with inputs correctly corresponding to each controller in their enumerated order (i.e. the first one receives inputs only from the joystick at index 0). When I dump the descriptors, they also look correct.
There are two issues:
1) Naming
Windows only reads the interface string from the first interface. According to the descriptor dump, each interface should have its own string, going from "Player 1" to "Player 4". Windows 7 sees them all as "Player 1". Inspecting regedit, this may be because Windows 7 only stores one OEM name per joystick device, and so only gets the one for the first interface. Am I stuck with this behaviour unless I somehow get a resolution from Microsoft?
For some reason, Windows 10 calls them all "Arduino Joystick". I'm not sure if it's because I'm using the same test VID/PID combo I got from an Arduino joystick tutorial and Windows is just picking up the name someone else has used for their device, or if it is concatenating my Manufacturer string with the interface type "Joystick". I'm not sure why it would do the latter, but if it's the former I'd prefer to block that look-up somehow.
I'd like to resolve both, but practically speaking I'm using Windows 7 mostly.
2) Mixed Inputs
I've seen this behaviour only with some applications, but unfortunately one of them is Steam, and the others may be due to Unity. It manifests differently in each, so I'm led to believe it's due to there being no standard way of dealing with composite joysticks.
On Steam, in Big Picture mode when I try to add/test a controller, while it detects all 4 controllers (all as Player 4, I might add), it only accepts the inputs from Joy4 no matter which of the controllers I choose. After saving the config however, all the joysticks have the same mappings applied. This is actually good, as I can use any controller to navigate Big Picture mode, but I'm concerned it's symptomatic of other problems which I might be seeing in other applications.
In "Race the Sun", when manually configuring joystick controls (it says Player 4 is detected), it will interpret inputs from a single joystick as coming from multiple joysticks. Usually, two of the four directional inputs come from Joy1, while the two other come from another Joystick other than the one being used. Eg: if I'm configuring Joy2, it'll register inputs from Joy1 and say Joy3.
In "Overcooked", it allows a single joystick to register as 4 different players. Normally you'd hit a particular button on the controller you want to use to register as a player, but in my case if you hit that button on joy1 4 times, then 4 players will be registered. If you start the game like this, you end-up controlling all 4 characters simultaneously with one joystick. Interesting, but not the intended usage, I'm sure.
Both "Race the Sun" and "Overcooked" are developed using Unity, and I understand that Unity's joystick management is rather lacking. Overcooked at least is designed to handle multiple players though (it's a couch co-op game), so this probably has more to do with the composite nature of my controllers.
I should note that other applications have no problems differentiating between the joysticks. Even xbox360ce sees them as separate, and the emulation works on several Steam games, single and multiplayer. Overcooked is still getting the joysticks crossed even though I'm using xbox360ce with it.
The question I'm bringing to Stack Overflow is: what can I do to improve how applications handle my joysticks? Right now I'm using the generic Windows game controller driver. Theoretically this should have been enough, but issue #1 shows that composite joysticks may not be an expected use case. Would driver development even have a hope of resolving the issue with the applications mentioned above? I don't see how the device would differ significantly in its identification. I'm hoping someone more experienced with coding for USB devices can offer some insight.
For what it's worth, my Arduino sketch and firmware can be found here.
I have found a solution for "Race the Sun" and "Overcooked".
"Race the Sun" may have expected my joystick to provide axis ranges from 0 to a certain maximum (eg: 32767). I had the firmware set up going from -255 to +255. While other applications handle this fine, "Race the Sun" may expect the more common 0-X behaviour (this is the range that a Logitech joystick of mine provides). After changing this, I was able to configure and play it correctly. I've also updated my GitHub project; the link is in the original question.
The problem with "Overcooked" was actually caused by a badly configured or corrupted xbox360ce installation. Somewhere in tinkering with the emulator I must have screwed something up, as I messed up games that were previously working. I solved it by wiping all its files, including the content in ProgramData/X360CE, and re-downloading everything and redoing the controllers. Now all my games seem to be behaving correctly.
This still leaves the problem with Steam. For some reason Steam doesn't remember my joystick configuration from reboot to reboot. For the time being I've decided just to put up with the default joystick behaviour, but I would like to sort this one out eventually, too.

How can I introduce input lag (keyboard and mouse) to my system?

(I work in QA, this really is for legitimate use.)
I'm trying to come up with a way to introduce forced input lag for both keyboard and mouse (in Windows). Like, when I press 'A' on the keyboard, I want to introduce a very slight delay before the OS processes that A. Or if I move the mouse, I'd like the same mouse speed, but just with the same slight delay before it kicks in. This lag needs to be present across any threads, not just the one that kicked off the process. But, the lag doesn't have to be to-the-millisecond precise every time.
I'm not even sure how to go about setting this up. I'm capable of writing it in whatever language/environment we may need; I'm just not sure where to start. I think something like AutoHotkey may be able to do what I want by essentially making an arbitrary key call a macro that delays slightly before sending that key, but I'm not sure what function calls I'd need to make it happen. Or maybe there's a way in C to get at the input across the OS before it kicks in. I'm just not sure.
Can anyone point me to some resources or a language/function(s) that can accomplish this? (Or even an already existing program or service.)
If you want a purely software solution, I'm afraid you'll need to develop a filter driver for your keyboard and mouse, which is very expensive to do.
Instead, you can plug your mouse and keyboard in somewhere else, have the input messages come through the network, and then introduce network latency. You could use a second PC + VNC software, a second PC + software USB/IP, or a hardware USB/IP device like this one.
There’s an easy but less reliable way.
You could install system-wide WH_KEYBOARD_LL and WH_MOUSE_LL hooks, discard the original messages, and after a delay send the postponed messages with the SendInput API. This should mostly work; however, there are cases where it won't, e.g. I don't expect it to affect most video games, because they use raw input.
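Here is a minimal sketch of the keyboard half, assuming a fixed 50 ms delay and a worker thread per keystroke (fine for testing, not for production use):

// Sketch: delay every real keystroke by DELAY_MS using a WH_KEYBOARD_LL hook.
// Injected events carry LLKHF_INJECTED, so re-sent keys pass straight through.
#include <windows.h>

#define DELAY_MS 50

static HHOOK g_hook;

static DWORD WINAPI resend_after_delay(LPVOID param)
{
    KBDLLHOOKSTRUCT *k = (KBDLLHOOKSTRUCT *)param;
    Sleep(DELAY_MS);

    INPUT in = {0};
    in.type = INPUT_KEYBOARD;
    in.ki.wVk = (WORD)k->vkCode;
    in.ki.wScan = (WORD)k->scanCode;
    in.ki.dwFlags = (k->flags & LLKHF_UP) ? KEYEVENTF_KEYUP : 0;
    SendInput(1, &in, sizeof(in));

    HeapFree(GetProcessHeap(), 0, k);
    return 0;
}

static LRESULT CALLBACK kbd_proc(int code, WPARAM wParam, LPARAM lParam)
{
    KBDLLHOOKSTRUCT *k = (KBDLLHOOKSTRUCT *)lParam;
    if (code == HC_ACTION && !(k->flags & LLKHF_INJECTED)) {
        // Copy the event, re-send it later from a worker thread,
        // and swallow the original by returning a nonzero value.
        KBDLLHOOKSTRUCT *copy = (KBDLLHOOKSTRUCT *)HeapAlloc(GetProcessHeap(), 0, sizeof(*copy));
        if (copy) {
            *copy = *k;
            CreateThread(NULL, 0, resend_after_delay, copy, 0, NULL);
            return 1;
        }
    }
    return CallNextHookEx(g_hook, code, wParam, lParam);
}

int main(void)
{
    g_hook = SetWindowsHookExW(WH_KEYBOARD_LL, kbd_proc, GetModuleHandleW(NULL), 0);

    // Low-level hooks require a message loop in the installing thread.
    MSG msg;
    while (GetMessageW(&msg, NULL, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessageW(&msg);
    }
    UnhookWindowsHookEx(g_hook);
    return 0;
}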

How does an OS or a systems program wait for user input?

I come from the world of web programming, where the server usually sets a superglobal variable through the specified method (GET, POST, etc.) that makes the data a user typed into a field available. Another way is to use AJAX to register a callback for an event that the XMLHttpRequest object fires once notified by the browser (I'm assuming...). So I guess my question is: is there some sort of dispatch interface that a systems programmer's code must interact with in order to run in response to user input, or does the programmer control the "waiting" process directly? And if there is a dispatch, is there a loop structure in the OS that waits for a particular event to occur?
I was prompted to ask this question here because I'm in a basic programming logic class and the professor won't answer such a "sophisticated" question as this one. My book gives a vague pseudocode example like:
//start
sentinel_val = 'stop';
get user_input;
while (user_input not equal to sentinel_val)
{
    // do something.
    get user_input;
}
//stop
This example leads me to believe 1) that if no input is received from the user, the loop will keep repeating the "do something" sequence with the old input (or none at all) until new input magically appears, and then it will repeat again with that or a null value. It seems the book has tried to use the example of priming and reading from a file to convey how a program would get data from event-driven input, no?
I'm confused :(
At the lowest level, input to the computer is asynchronous-- it happens via "interrupts", which is basically something external to the CPU (a keyboard controller) sending a signal to the CPU that says "stop what you're doing and accept this data". (It's complex, but this is the general idea). So the CPU stops, grabs the keystroke, and puts it in a buffer to be read, and then continues doing what it was doing before the interrupt.
Very similar things happen with inbound network traffic, and the results of reading from a disk, etc.
At a higher level, it gets more dependent on the operating system or framework that you're using.
With keyboard input, there might be a process (an application, basically) that is blocked, waiting for user input. That "block" doesn't mean the computer just sits there waiting; it lets other processes run instead. But when the keyboard result comes in, it will wake up the one that was waiting for it.
From the point of view of that waiting process, it called some function like get_next_character() and that function eventually returned with the character. Etc.
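To make that concrete, here's a minimal C version of the book's loop; fgets() blocks inside the OS until the user presses Enter, so the body never runs on stale input:

/* Minimal C version of the sentinel loop from the question. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[128];

    printf("> ");
    while (fgets(line, sizeof line, stdin) != NULL &&   /* blocks right here */
           strncmp(line, "stop", 4) != 0) {
        printf("you typed: %s> ", line);
    }
    return 0;
}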
Frankly, how all this stuff ties together is super interesting and useful to understand. :)
An OS is driven by hardware events (called interrupts). An OS does not busy-wait for an interrupt; instead, it executes a special instruction that puts the CPU to sleep inside an idle loop. When a hardware event occurs, the corresponding interrupt handler is invoked.
"It seems the book has tried to use the example of priming and reading from a file to convey how a program would get data from event driven input, no?"
Yes, that is what the book is doing. In fact, the Unix operating system is built on the idea of abstracting all input and output from any device to look like this.
In reality, most operating systems and hardware use interrupts that jump to what we can call a subroutine, perform the low-level data read, and then return control to the operating system.
Also, on most systems many of the devices work independently of the rest of the operating system and present a high-level API to it. For example, a keyboard controller (or maybe a better example is a network card) processes interrupts itself, and the keyboard driver then presents the operating system with a different API. You can look at the standards for these devices to see what those APIs are. If you want to know what API the keyboard presents, for example, you could look at the source code for the keyboard driver in a Linux distro.
A basic explanation based on my understanding...
Your get user_input pseudo-function is often something like readLine. That means the function will block until the data read contains a newline character.
Below this, the OS uses interrupts (meaning it isn't dealing with the keyboard unnecessarily, but only when required) so it can respond when the user hits some keys. The keyboard interrupt causes execution to jump to a special routine which fills an input buffer with data from the keyboard. The OS then allows the appropriate process - generally the active one - to use readLine-style functions to access this data.
There's a bunch more complexity in there but that's a simple view. If someone offers a better explanation I'll willingly bow to superior knowledge.

GUI simulation for smart home application

I'm looking for suggestions on which GUI tool is most appropriate for implementing my study. I'm using the Java language. I would like the graphics to simulate a house in which graphical changes apply without user input from a mouse or keyboard; my user input is in the form of SMS. I'm hoping to animate or simulate a smart home through the conditions I have set in my program. Thanks in advance, guys!
Your question is very underspecified. I will assume that you are at the early stages of producing a hand-rolled home automation program; you probably need:
1. an environment to let you test the core logic of the system (i.e. "If the system is in state X and I issue command Y, what does it actually do, and will I lose the contents of my freezer?")
2. an environment to let you test the SMS communications module
3. a demo mode to show prospective customers what it does (this is my best guess at what is being requested here)
Now (3) could fill in for (1), but is a lot more programming effort, so from the start you probably want a simple text interface to do (1).
In general, you almost certainly want a modular system: a core logic system supported by at least two input models (SMS and keyboard), three output models (text debug, graphical demo, and control-line/wireless signals for the actual hardware), and various ancillary stuff (configuration reading, saved state handling). Come to think of it, since you probably need a way to probe the current state of the system, you should make the saved state and condition probe code share a single framework as well.
