Programmatically Press a Game Controller Button on Windows

TLDR
I have a USB game controller and am searching for a method, in any Windows-compatible language, to programmatically press a button on it.
Detail
I have a Windows gaming PC and a set of four Ultimarc Ultrasticks built into an arcade cabinet. Each Ultimarc joystick appears to Windows as a 16-button device because of a shift-key feature I have no interest in using: it would eat a button and slow response time. My layout uses 8 buttons, which is exactly how many physical inputs the joystick has.
However, I'm hopeful I can use the fact that those buttons exist for the device to add a virtual start button to each joystick.
I'm open to using any language, but I'm looking for a way to programmatically press the 9th button on this joystick, which exists as far as Windows knows despite having no physical input behind it.
I'm familiar with key remappers that create a virtual gamepad whose keys are mapped to other physical devices, but those come with the complexity of duplicate joysticks on the system, and they don't seem necessary if I can just virtually press this one extra button, since all the other buttons and axes are already mapped the way I want.
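As far as I know, no user-mode Windows API can inject a button press into an existing physical HID device's report stream, so the common workaround is a virtual device fed from software. For comparison, here is a minimal sketch using the vJoy feeder API (assumptions: the vJoy driver and SDK are installed, and vJoy device 1 is configured with at least 9 buttons). It does create a separate virtual joystick, which is exactly the duplication the question hopes to avoid:

```cpp
// Hypothetical sketch: press "virtual" button 9 on a vJoy device.
// Assumes the vJoy driver + SDK are installed and vJoyInterface.lib is linked.
#include <windows.h>
#include "vJoyInterface.h"  // from the vJoy SDK

int main()
{
    const UINT device = 1;   // assumption: vJoy device 1 is configured
    const UCHAR button = 9;  // the desired "virtual start" button

    if (!vJoyEnabled() || !AcquireVJD(device))
        return 1;  // driver missing or device busy

    SetBtn(TRUE, device, button);   // press
    Sleep(50);                      // hold long enough for the game to see it
    SetBtn(FALSE, device, button);  // release

    RelinquishVJD(device);
    return 0;
}
```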

Related

Workflow for multi-tasking - Mac (AppleScript) - is it possible?

I am a computer science student at a university, and my workflow usually involves many open windows. On a Mac, in addition to the horizontal scrolling between desktops, is it possible to do a kind of vertical scrolling between windows of the same desktop, so that switching windows happens via a keyboard or trackpad gesture that cycles through the windows on the current desktop as if they were a "circular" queue?
https://github.com/diegoalfarog/WindowQueueMac

How does the on-screen (virtual) keyboard work in Win10?

I haven't found anything relevant on Google or any Microsoft site, so I decided to ask here.
Windows-based OSes ship with a virtual keyboard, and *nix-based OSes have one too. So the question is:
HOW DOES IT WORK INSIDE?
I mean, suppose I open the on-screen keyboard in Windows 10. What's the actual difference between:
input via the hardware keyboard, when I press the X key
..and input via the virtual keyboard, when I press the same key?
Imagine I have admin access to the terminal/computer: is there any way to track/distinguish that the second time, the key was pressed not on the hardware keyboard but on the on-screen version (by mouse click)?
There is also plenty of software, like AutoIt (yes, it's a language, but it's relevant here), that emulates pressing the X key. How does it work on a Windows-based OS? Does it share the mechanism of the default on-screen keyboard, using the same driver/WinAPI, or is there a difference?
And the second case, between:
the default on-screen keyboard
a compiled AutoIt script
..any other software that emulates pressing the X key
I guess the only way to find out "how exactly the key was pressed" is to check the current process list via taskmgr and see whether anything has been launched. Or am I totally wrong here and missing something?
THE SCOPE
I have written a node.js script which emulates key-pressing behaviour in a Windows app.
TL;DR of the business logic: open notepad.exe and type `Hello world`.
Could someone give me advice or recommend a PowerShell/bat script (or any other solution) demonstrating the GetAsyncKeyState check behaviour? With it I could easily test my own node.js script (not its business logic, but whether it really triggers the press-X-key event).
I found an answer for the node.js case here: Detecting Key Presses Across Applications in Powershell
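For illustration, a minimal polling loop along those lines in C++ (rather than the requested PowerShell) might look like this:

```cpp
// A minimal C++ polling loop illustrating the GetAsyncKeyState check:
// it reports each time the X key transitions to the down state,
// regardless of which application has focus.
#include <windows.h>
#include <cstdio>

int main()
{
    bool wasDown = false;
    for (;;)
    {
        // High bit set => the key is currently down, regardless of focus.
        const bool isDown = (GetAsyncKeyState('X') & 0x8000) != 0;
        if (isDown && !wasDown)
            std::puts("X pressed");
        wasDown = isDown;
        Sleep(10);  // coarse polling is fine for a test harness
    }
}
```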
SendInput is the preferred method of generating user input in software. The Windows on-screen keyboard probably uses it for everything except Ctrl+Alt+Delete, which I believe gets some kind of special handling; the on-screen keyboard can only generate Ctrl+Alt+Delete in certain configurations.
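A minimal SendInput sketch, for reference; this is plain Win32 and not tied to any particular on-screen keyboard implementation:

```cpp
// A minimal SendInput sketch: synthesize a press and release of the X key.
#include <windows.h>

int main()
{
    INPUT in[2] = {};

    in[0].type = INPUT_KEYBOARD;
    in[0].ki.wVk = 'X';             // virtual-key code for X

    in[1].type = INPUT_KEYBOARD;
    in[1].ki.wVk = 'X';
    in[1].ki.dwFlags = KEYEVENTF_KEYUP;

    SendInput(2, in, sizeof(INPUT));
    return 0;
}
```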
Software-generated input is merged with normal hardware input in the RIT (Raw Input Thread) in the kernel.
A low-level keyboard hook can detect software-generated input.
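A sketch of such a hook; injected events carry the LLKHF_INJECTED flag, which is one concrete way to tell software-generated keystrokes from hardware ones:

```cpp
// A sketch of a low-level keyboard hook: injected keystrokes carry the
// LLKHF_INJECTED flag, which distinguishes them from hardware input.
#include <windows.h>
#include <cstdio>

LRESULT CALLBACK HookProc(int code, WPARAM wParam, LPARAM lParam)
{
    if (code == HC_ACTION && wParam == WM_KEYDOWN)
    {
        const KBDLLHOOKSTRUCT* k = reinterpret_cast<KBDLLHOOKSTRUCT*>(lParam);
        std::printf("vk=0x%02lX %s\n", k->vkCode,
                    (k->flags & LLKHF_INJECTED) ? "injected" : "hardware");
    }
    return CallNextHookEx(nullptr, code, wParam, lParam);
}

int main()
{
    HHOOK hook = SetWindowsHookExW(WH_KEYBOARD_LL, HookProc,
                                   GetModuleHandleW(nullptr), 0);

    MSG msg;  // a low-level hook needs a message loop on its thread
    while (GetMessageW(&msg, nullptr, 0, 0) > 0)
        DispatchMessageW(&msg);

    UnhookWindowsHookEx(hook);
    return 0;
}
```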

Xamarin Forms UWP tablet app on Windows 10: clicking on 2nd textbox closes keyboard

We are doing user acceptance testing on a Xamarin Forms UWP app targeting Windows 10 tablets. When the user clicks on a textbox to enter data, the soft keyboard appears as expected. However, when the user then clicks into the next textbox to enter a second required piece of data, the soft keyboard hides/closes. The user therefore has to hit the second textbox twice (and any others that need to be filled): the first click hides the keyboard, the second makes it reappear. To say the least, this is not a good user experience. I'm guessing the focus event of the second textbox fires before the lost-focus of the first? Has anyone else observed this behavior, and is there an easy fix? As we may want to target both Android and Windows, I'm hoping for a simple solution, but maybe UWP just has some problems?

Get Physical Keyboard Layout Programmatically

Is there a way to programmatically find out what kind of keyboard a computer has (i.e. where keys are located, and what extra keys are present in which locations)?
A little error is acceptable, if the keyboard is very non-standard, but in general, the point is to construct an on-screen keyboard-like application that can dynamically draw the keyboard layout on the screen, with high accuracy.
When connected to a computer, a keyboard sends "scan codes" to the operating system. On Windows, scan codes are converted into virtual keys (a hardware-independent mapping of the keyboard) and then into real characters.
The MapVirtualKeyEx() function of the Windows API lets you translate between scan codes, virtual keys and characters. It should also be able to tell you whether a key is nonexistent.
Together with GetKeyboardLayout(), which tells you which keyboard layout is active at any point in time (the layout can differ between running applications), it should let you build a pretty accurate map of the keyboard; a sketch follows below.
Anyway, have a look at the keyboard input section of MSDN.
I will add that almost all keyboards share nearly the same layout. Although there is no way to know where a key is physically located, you can probably guess from the scan code and basic knowledge of your own keyboards.
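A sketch of that mapping (the scan-code range is an arbitrary slice chosen for illustration):

```cpp
// A sketch of the scan-code -> virtual-key -> character mapping described
// above; the scan-code range is an arbitrary slice, not an exhaustive scan.
#include <windows.h>
#include <cstdio>

int main()
{
    // Layout of the foreground window's thread (layouts are per-thread).
    HKL layout = GetKeyboardLayout(
        GetWindowThreadProcessId(GetForegroundWindow(), nullptr));

    for (UINT scan = 0x01; scan <= 0x39; ++scan)
    {
        UINT vk = MapVirtualKeyExW(scan, MAPVK_VSC_TO_VK_EX, layout);
        if (vk == 0)
            continue;  // no key exists at this scan code

        // Low word is the character; the high bit flags dead keys.
        UINT ch = MapVirtualKeyExW(vk, MAPVK_VK_TO_CHAR, layout) & 0x7FFF;
        std::printf("scan 0x%02X -> vk 0x%02X -> '%c'\n", scan, vk,
                    (ch >= 32 && ch < 127) ? (char)ch : '?');
    }
    return 0;
}
```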
There is no mechanism by which a keyboard can tell Windows what its physical layout looks like. Easy to see with the Windows version of an on-screen keyboard, osk.exe. It seems to be able to guess the form-factor of the machine (laptop vs desktop) but on my laptop it doesn't match the layout of the keyboard.
Use the osk.exe layout as a template so nobody can complain that yours doesn't match well.

2 mice: capturing exclusively one mouse on Windows (DirectInput, DDK, Linux, anything)

I have connected 2 mice to a PC, and I want one mouse to work as the regular mouse while capturing the second mouse exclusively.
First I tried DirectInput. It showed 2 devices with the word "mouse" in InstanceName, but only one device had DeviceType.Mouse, and it was the only really working device. When I acquired it, it blocked both mice.
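For what it's worth, that DirectInput behavior is expected: modern DirectInput exposes the system mouse rather than individual mice. The Raw Input API can at least tell two mice apart, although it observes input rather than blocking it. A sketch, assuming a message-only window is acceptable as the WM_INPUT sink:

```cpp
// A sketch of the Raw Input alternative: WM_INPUT reports which mouse
// produced each event, though it does not block the other mouse.
#include <windows.h>
#include <cstdio>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    if (msg == WM_INPUT)
    {
        RAWINPUT raw;
        UINT size = sizeof(raw);
        GetRawInputData(reinterpret_cast<HRAWINPUT>(lp), RID_INPUT,
                        &raw, &size, sizeof(RAWINPUTHEADER));
        if (raw.header.dwType == RIM_TYPEMOUSE)
            std::printf("device %p dx=%ld dy=%ld\n", raw.header.hDevice,
                        raw.data.mouse.lLastX, raw.data.mouse.lLastY);
    }
    return DefWindowProcW(hwnd, msg, wp, lp);
}

int main()
{
    WNDCLASSW wc = {};
    wc.lpfnWndProc = WndProc;
    wc.hInstance = GetModuleHandleW(nullptr);
    wc.lpszClassName = L"rawmouse";
    RegisterClassW(&wc);

    // Message-only window: we only need it to receive WM_INPUT.
    HWND hwnd = CreateWindowW(L"rawmouse", L"", 0, 0, 0, 0, 0,
                              HWND_MESSAGE, nullptr, wc.hInstance, nullptr);

    // Usage page 0x01 / usage 0x02 = generic desktop / mouse.
    RAWINPUTDEVICE rid = { 0x01, 0x02, RIDEV_INPUTSINK, hwnd };
    RegisterRawInputDevices(&rid, 1, sizeof(rid));

    MSG msg;
    while (GetMessageW(&msg, nullptr, 0, 0) > 0)
        DispatchMessageW(&msg);
    return 0;
}
```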
Second, I decided to create a driver. I downloaded the WinDDK, which includes a Mouse Filter driver sample that I was able to compile. But I am not a driver programmer; the sample is complex for me to understand and contains code related to PS/2 mice, while all my mice are USB. It is meant to be installed with an .INF file, whereas I would prefer it to be dynamically loaded. I am not at all sure this is the right direction.
I am doing this for a hobbyist robotics project: I want the PC to take information from a mouse used as a sensor. I think there must be existing similar projects or solutions.
I have a Linux PC on my table as well. Maybe it is better to attach the mouse to Linux and parse
/dev/input/mouse0
/dev/input/mouse1
/dev/input/mouse2
`sudo cat /dev/input/mouse1` gives some data, but it does not stop the device's clicks and movements from reaching the desktop.
I hope a simple solution already exists.
Cheers
Max
For Linux, you need to either declare the first mouse as the CorePointer or configure the second mouse to have SendCoreEvents false. See the xorg.conf(5) man page for more details.
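Beyond the X server configuration, the evdev event nodes offer an exclusive grab that /dev/input/mouseN lacks. A sketch, with the event-node path as an assumption:

```cpp
// A Linux sketch: grab one mouse exclusively through evdev. Unlike
// /dev/input/mouseN, the event node supports EVIOCGRAB, which stops
// the grabbed device from driving the desktop cursor at all.
// The event-node path below is an assumption; find the right one in
// /proc/bus/input/devices.
#include <fcntl.h>
#include <linux/input.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    int fd = open("/dev/input/event5", O_RDONLY);  // hypothetical node
    if (fd < 0) { perror("open"); return 1; }

    if (ioctl(fd, EVIOCGRAB, 1) < 0)  // exclusive grab
    { perror("EVIOCGRAB"); return 1; }

    struct input_event ev;
    while (read(fd, &ev, sizeof(ev)) == (ssize_t)sizeof(ev))
    {
        if (ev.type == EV_REL)  // relative motion (REL_X / REL_Y)
            std::printf("axis %d value %d\n", (int)ev.code, ev.value);
    }

    ioctl(fd, EVIOCGRAB, 0);  // release the grab
    close(fd);
    return 0;
}
```

With the grab held, the desktop no longer sees the device, which matches the "capture exclusively" requirement.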
