Programmatically enable multitouch support? (Windows)

I am running Windows 7 on my laptop, and all is well, but I am jealous of the multitouch on Macs. I don't really know how all of this "works", but I'm imagining that it couldn't be that hard to write a program that patches into Windows to allow this.
Currently, if I put two fingers on the pad and drag around, it sort of half-heartedly tries to follow both. Or something. It's pitiful. After extended experimentation, I don't think it can really track both points at once. But perhaps I could mathematically detect the fumbling that occurs when I put my second finger down, and then "release" tracking on one of them.
Basically, I'm not trying for true multitouch (like stretching images), but I feel like I ought to be able to put something together that detects a two-finger press as a right click. That's a step in the right direction.
What would I need to write that in? How would I install it?
If you're going to say it's impossible, then set that aside and take it from another direction: what if I wanted to enable a triple tap as a right click? (Please, no pre-built third-party solutions; I want to write this myself.) That's certainly got to be possible.
How do I tap in to the resources I need?

This might be a hardware and/or driver limitation. Not all touch surfaces (like trackpads) support multitouch.
You might want to check out the Raw Input API on MSDN, which supports alternative input methods.
Edit:
Note that the Raw Input API only provides access to multitouch if it's supported by your hardware.
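A hedged C++ sketch of registering for raw touchpad input with that API, assuming you already have a Win32 window. The usage page/usage pair (0x0D digitizer, 0x05 touch pad) comes from the HID usage tables; whether any multi-finger reports actually arrive depends entirely on the hardware and driver, as noted above.

```cpp
#include <windows.h>
#include <cstdio>
#include <vector>

bool RegisterTouchpad(HWND hwnd)
{
    RAWINPUTDEVICE rid = {};
    rid.usUsagePage = 0x0D;            // HID digitizer usage page
    rid.usUsage     = 0x05;            // touch pad
    rid.dwFlags     = RIDEV_INPUTSINK; // receive input even when unfocused
    rid.hwndTarget  = hwnd;
    return RegisterRawInputDevices(&rid, 1, sizeof(rid)) != FALSE;
}

// Inside your window procedure, WM_INPUT delivers the raw HID reports.
LRESULT OnRawInput(HWND hwnd, LPARAM lParam)
{
    UINT size = 0;
    GetRawInputData((HRAWINPUT)lParam, RID_INPUT, nullptr, &size,
                    sizeof(RAWINPUTHEADER));
    std::vector<BYTE> buf(size);
    if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, buf.data(), &size,
                        sizeof(RAWINPUTHEADER)) == size)
    {
        const RAWINPUT* raw = reinterpret_cast<const RAWINPUT*>(buf.data());
        if (raw->header.dwType == RIM_TYPEHID)
            printf("HID report: %lu bytes x %lu\n",
                   raw->data.hid.dwSizeHid, raw->data.hid.dwCount);
    }
    return 0;
}
```

Even if reports do show up, you would still have to decode the vendor's HID report format to extract per-finger coordinates before you could synthesize a right click from a multi-finger tap.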

This is not possible. It has been discussed on the www.insanelymac.com forum, where people build Hackintosh PCs (basically PCs running Mac OS X). Several attempts have been made, but due to the construction of many touchpads and the method by which they collect sensory data, it is impossible. Best of luck...

Related

How can I make ONE big desktop icon on windows?

I would like to make one desktop icon extra big, without affecting all the others. Why? I am trying to help out someone with a visual impairment, and I would like the icon to do something special, like enabling/disabling Bluetooth or a VPN. A natural alternative would be a small, already-running window, but I was hoping to avoid having to program a new UI.
Is it possible to make a single desktop icon larger, so that it would span (let's say) 4 others?
(If not, what would be the better alternative solution?)
Windows doesn’t support this natively. You can make desktop icons larger or smaller - all of them, not just one, as you already know.
There might be a third-party program which does this, but I haven't discovered one.

Resize a Window (On Windows OS) while Rendering

All I want is to make a simple application that you can resize while it keeps rendering.
(i.e. resize without ever once seeing the buffer fail to cover the edge of the window)
Most commercial, professional, and major open-source programs seem to be capable of this, while almost no personal or hobbyist programs seem to be. (I have no idea why.)
I want to make a professional looking program like that.
A few examples of what I'm talking about:
https://gamedev.stackexchange.com/questions/127691/how-to-stop-sdl-from-freezing-the-rendering-while-resizing-the-window
https://www.gamedev.net/forums/topic/488074-win32-message-pump-and-opengl---rendering-pauses-while-draggingresizing/
https://en.sfml-dev.org/forums/index.php?topic=19388.5
What I have used in the past for windowing are:
SDL (Currently)
SFML
GLFW +OpenGL
And this problem applies to all 3 from what I can recall.
I would like to know the following:
Whether this problem is solvable at all, and why or why not.
I've never once worked at that low a level (OS APIs or graphics back-ends), so I just want to know why.
What's the way to solve it? Is it within my means?
Is the solution really a perfect solution? I've seen many people suggest fixes that have various problems
(e.g. shrinking the artifact but not getting rid of it entirely, or getting rid of it at the cost of a ton of flickering (I forget why, but it doesn't matter)).
My current understanding is that this is a Win32 API/Windows API issue related to blocking: while you drag the frame, the window is stuck in a modal sizing loop that stops the normal message pump from running.
I don't have any deeper understanding or knowledge on how to create my own solution easily, but if I must learn then I will.
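One commonly suggested workaround, sketched below in plain Win32 C++ terms (window creation is omitted, and Render() is a trivial GDI stand-in for real drawing code): the modal sizing loop inside DefWindowProc blocks your own loop, but WM_TIMER messages are still dispatched inside that modal loop, so a timer started on WM_ENTERSIZEMOVE keeps frames coming while the user drags.

```cpp
#include <windows.h>

void Render(HWND hwnd)
{
    // Stand-in drawing: fill the client area so any uncovered edge would be
    // visible immediately.
    HDC dc = GetDC(hwnd);
    RECT rc;
    GetClientRect(hwnd, &rc);
    FillRect(dc, &rc, (HBRUSH)GetStockObject(DKGRAY_BRUSH));
    ReleaseDC(hwnd, dc);
}

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_ENTERSIZEMOVE:              // modal loop begins, main loop stalls
        SetTimer(hwnd, 1, 16, nullptr); // ~60 Hz ticks inside the modal loop
        return 0;
    case WM_EXITSIZEMOVE:
        KillTimer(hwnd, 1);
        return 0;
    case WM_TIMER:
        Render(hwnd);                   // keep drawing while dragging
        return 0;
    case WM_SIZE:
        // Resize your swap chain / viewport here, then redraw right away so
        // the newly exposed area never shows stale pixels.
        Render(hwnd);
        return 0;
    case WM_PAINT:
    {
        PAINTSTRUCT ps;
        BeginPaint(hwnd, &ps);          // validates the update region
        EndPaint(hwnd, &ps);
        Render(hwnd);
        return 0;
    }
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProcW(hwnd, msg, wParam, lParam);
}
```

Whether this counts as a "perfect" solution depends on the back-end; with some swap-chain setups you still need to resize the buffers in WM_SIZE to avoid stretching artifacts.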

MonoGame Platform Agnostic Input/Output

I've done a bit of XNA work, and I'm now trying to work in MonoGame. Previously, for all my input and output needs, I used Microsoft.Xna.Framework. I'm now trying to make one version of my game to deploy on as many platforms as possible (excluding, at the moment, touch interfaces), but I don't know what I should be doing regarding the mouse, for example.
Does MonoGame make Microsoft.Xna.Framework platform-agnostic or do I have to use other frameworks and switch between them depending on the platform?
MonoGame is designed to make it easy to port your game to other platforms, so you shouldn't need to use any other frameworks to achieve that goal. However, it's not as simple as just recompiling the code for each new platform.
For the most part all of your code will remain the same, but you'll need to put together a project for each platform and link all of the code files in each one. I won't go into detail about this, but I'll just say that you can do it and it's not that difficult.
Now, what you will find is that you may have to write some platform specific code to handle device specific stuff like screen scaling and input handling. What exactly you need to do will depend on your game, so I can't really explain that in detail either.
To make your life easier, it can be helpful to think about how your game is going to work on other platforms and write your code accordingly. For example, a touch on a mobile device is very similar to a mouse click, so you could wrap this functionality in a method of your own to minimize the code changes required when porting (sketched below). On the other hand, some things you can do with a mouse simply don't work on touch interfaces, like right-click and hover. Similarly, touch interfaces have commonly used gestures that don't really map to a mouse on a PC, like long press, swipe, and pinch.
So the short answer is, you don't HAVE to do anything special, but you should at least think about it if you plan to port your game in the future.
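A hedged sketch of the "wrap input in a method of your own" idea. The question is about MonoGame/C#, but the pattern is language-independent, so this is plain C++ with stubbed-out backends; QueryMouse/QueryFirstTouch are hypothetical stand-ins for calls like Mouse.GetState()/TouchPanel.GetState(), not real APIs.

```cpp
struct PointerState { bool pressed = false; int x = 0, y = 0; };

// Hypothetical per-platform backends; a real build would fill these in from
// the platform's actual input API.
static PointerState QueryMouse()      { return {}; } // desktop: button + cursor
static PointerState QueryFirstTouch() { return {}; } // mobile: first touch, if any

// The only input call the rest of the game ever uses. Game logic written
// against this compiles unchanged on every platform.
PointerState GetPrimaryPointer()
{
#if defined(GAME_PLATFORM_MOBILE)   // assumed build-time define
    return QueryFirstTouch();       // a touch stands in for a left-click
#else
    return QueryMouse();
#endif
}
```

The design point is simply that the platform-specific branch lives in one place instead of being scattered through the game logic.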

Making use of laptop custom buttons

I have a Lenovo Y550 laptop, which has a nice-looking touch-sensitive strip with LED lights above the keyboard. Its stock functionality, however, is quite useless (it can be used to start 4 different Lenovo programs), so I started to wonder whether I could program something of my own for it.
However I don't have any experience with this kind of thing.
First of all, I'd like to know if it's even possible to use it in my own program in any way (capture touches or even control the lights).
Second, where should I start researching this? I checked Windows Device Manager to see if I could spot anything helpful there, but had no success; I can only see various kinds of HID devices there.
One idea I have is to use some kind of hook to capture this input. Could that work? I don't really mind what language I'll have to use; learning new ones is useful anyway.
If it's possible to fully control the touch-sensitive strip, it would be nice to light up the LEDs however I want (right now an LED lights up wherever my finger is).
I did something similar for my Logitech keyboard. I had to install a low-level keyboard hook, and I translated the Logitech-specific key codes into generic Windows ones. All in all a very simple, short and limited tool, without any configuration, but it did its work reliably and only used a few KB of memory.
Get a low-level keyboard hook working, set a breakpoint and see what key codes you get. Don't know about the LEDs though.
Edit: Found it. It's been quite a while since I wrote this.
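A minimal C++ sketch of that approach: install a WH_KEYBOARD_LL hook and log whatever arrives, which is how you would discover what (if anything) the strip sends. Whether the Lenovo strip generates ordinary key events at all is an open question; it may only talk to the vendor's own driver, in which case nothing will show up here.

```cpp
#include <windows.h>
#include <cstdio>

static HHOOK g_hook = nullptr;

LRESULT CALLBACK LowLevelKeyboardProc(int nCode, WPARAM wParam, LPARAM lParam)
{
    if (nCode == HC_ACTION)
    {
        const KBDLLHOOKSTRUCT* kb =
            reinterpret_cast<const KBDLLHOOKSTRUCT*>(lParam);
        printf("msg=0x%04llX vk=0x%02lX scan=0x%02lX\n",
               (unsigned long long)wParam, kb->vkCode, kb->scanCode);
    }
    return CallNextHookEx(g_hook, nCode, wParam, lParam);
}

int main()
{
    g_hook = SetWindowsHookExW(WH_KEYBOARD_LL, LowLevelKeyboardProc,
                               GetModuleHandleW(nullptr), 0);
    if (!g_hook) return 1;

    // Low-level hooks need a message loop on the installing thread.
    MSG msg;
    while (GetMessageW(&msg, nullptr, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessageW(&msg);
    }
    UnhookWindowsHookEx(g_hook);
    return 0;
}
```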

API for getting screen region changes?

I am writing a sort of screen-recording app for Windows and wish to know when and which regions of the screen/active window have changed.
Is there a Windows API I can hook to get notified of screen changes?
Or would I need to manually write something like this? :(
I always figured that Remote Desktop used some sort of API to detect what regions of the screen had changed and only sent back those images - this is exactly the behavior that I need.
I don't think there is an API in Windows that can tell you which parts of the screen have changed.
One possible way is using a video mirror driver like UltraVNC uses.
I think you'll find some clues in Screen Event Recorder DLL/Application, in About Hooks, and in Writing a Macro Recorder/Player using Win32 Journal Hooks.
It would seem that you're going to have to do a fair bit of work to detect screen changes. See this posting at tech-archive.net, for instance. With that approach you copy a reference screenshot to RAM, then take another and compare the two. It'd be up to you to define what kind of change is a meaningful one. It's similar material to this article on desktop capture.
I think Remote Desktop streams GDI-like commands. I don't know how they capture them in the first place.
Thanks for your help everyone. I ended up writing an image differencing class which seems to calculate the changed rectangles surprisingly quickly. I've posted the gist of how it works here.
At the moment I'm just running it on a timer, but I'm planning to trigger it after input events too.
Thanks heaps for your links, Boost - I've only just looked at this thread again, so I'll check them out soon.
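For readers wanting a starting point, here is a rough C++/GDI sketch of the capture-and-compare idea discussed in this thread. The 32x32 tile size and the row-wise memcmp are arbitrary illustrative choices, not the poster's actual class.

```cpp
#include <windows.h>
#include <cstring>
#include <vector>

struct Frame { int w = 0, h = 0; std::vector<BYTE> px; }; // 32-bit BGRA, top-down

void CaptureScreen(Frame& f)
{
    f.w = GetSystemMetrics(SM_CXSCREEN);
    f.h = GetSystemMetrics(SM_CYSCREEN);
    f.px.resize((size_t)f.w * f.h * 4);

    HDC screen = GetDC(nullptr);
    HDC mem = CreateCompatibleDC(screen);
    HBITMAP bmp = CreateCompatibleBitmap(screen, f.w, f.h);
    HGDIOBJ old = SelectObject(mem, bmp);
    BitBlt(mem, 0, 0, f.w, f.h, screen, 0, 0, SRCCOPY);
    SelectObject(mem, old); // deselect the bitmap before GetDIBits

    BITMAPINFO bi = {};
    bi.bmiHeader.biSize = sizeof(bi.bmiHeader);
    bi.bmiHeader.biWidth = f.w;
    bi.bmiHeader.biHeight = -f.h; // negative height = top-down rows
    bi.bmiHeader.biPlanes = 1;
    bi.bmiHeader.biBitCount = 32;
    bi.bmiHeader.biCompression = BI_RGB;
    GetDIBits(mem, bmp, 0, f.h, f.px.data(), &bi, DIB_RGB_COLORS);

    DeleteObject(bmp);
    DeleteDC(mem);
    ReleaseDC(nullptr, screen);
}

// Compare two same-size frames tile by tile and collect changed tiles.
// Merging adjacent tiles into bigger rectangles is left out for brevity.
void DiffFrames(const Frame& a, const Frame& b, std::vector<RECT>& dirty)
{
    const int tile = 32;
    for (int y = 0; y < a.h; y += tile)
        for (int x = 0; x < a.w; x += tile)
        {
            const int tw = min(tile, a.w - x), th = min(tile, a.h - y);
            bool changed = false;
            for (int row = 0; row < th && !changed; ++row)
            {
                const size_t off = ((size_t)(y + row) * a.w + x) * 4;
                changed = std::memcmp(&a.px[off], &b.px[off],
                                      (size_t)tw * 4) != 0;
            }
            if (changed)
                dirty.push_back({ x, y, x + tw, y + th });
        }
}
```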
