How to Insert my Code Between ZoomText and Windows?

I am writing an application whose users will use ZoomText and a tablet PC. ZoomText is screen magnification software. However, ZoomText has a bug that prevents tablet tracking from working correctly, so finger and pen interaction with the screen is wrong. When you zoom in on a rectangle of the screen and tap on something, you are actually tapping at the absolute screen coordinates, as if ZoomText were not running.
I am trying to write a program that will correct this behavior. ZoomText has a COM API which allows me to know the zoom amount and location. This means that if I were able to get between ZoomText and the operating system, I could intercept the pen/touch input, translate the coordinates taking into account ZoomText's zoom and location, and then pass the input back to the operating system.
Where should I begin? I don't even know where to start looking for how to implement this.

I think the way to go about this is a low-level mouse hook: use SetWindowsHookEx with a hook type of WH_MOUSE_LL.
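A minimal sketch of that approach, assuming the zoom factor and the origin of the magnified rectangle have already been read from ZoomText's COM API (g_zoom and g_origin below are hypothetical placeholders for those values, and the coordinate mapping only illustrates where the translation would happen):

    // Low-level mouse hook that sees every mouse/pen event before the
    // target window does, so the coordinates can be corrected there.
    #include <windows.h>

    static HHOOK g_hook = nullptr;

    // Hypothetical values, to be filled in from ZoomText's COM API.
    static double g_zoom = 2.0;          // current magnification factor
    static POINT  g_origin = { 0, 0 };   // top-left of the magnified rectangle

    LRESULT CALLBACK LowLevelMouseProc(int nCode, WPARAM wParam, LPARAM lParam)
    {
        if (nCode == HC_ACTION)
        {
            const MSLLHOOKSTRUCT* info = reinterpret_cast<MSLLHOOKSTRUCT*>(lParam);
            // Map the absolute tap position into the zoomed coordinate space.
            int x = g_origin.x + static_cast<int>(info->pt.x / g_zoom);
            int y = g_origin.y + static_cast<int>(info->pt.y / g_zoom);
            // One option: swallow this event and re-inject a corrected one
            // with SendInput, taking care to skip events that carry
            // LLMHF_INJECTED so the hook does not reprocess its own input.
            (void)x; (void)y;
        }
        return CallNextHookEx(g_hook, nCode, wParam, lParam);
    }

    int main()
    {
        g_hook = SetWindowsHookEx(WH_MOUSE_LL, LowLevelMouseProc,
                                  GetModuleHandle(nullptr), 0);
        // A message loop is required for a low-level hook to be called.
        MSG msg;
        while (GetMessage(&msg, nullptr, 0, 0) > 0)
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        UnhookWindowsHookEx(g_hook);
        return 0;
    }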

Related

Set Joystick position using psychtoolbox

I am running an experiment involving a joystick (I cannot use a mouse for technical reasons), using Psychtoolbox version 3 on a Linux computer. At discrete time points in the experiment, I need to place the joystick cursor at predetermined points on the screen, so that the participant can move the cursor from there. However, Psychtoolbox does not seem to have a command to do this. I can get this to work with a mouse, but not with a joystick. I tried the SetMouse command, but it does not seem to work for the joystick cursor. Does anyone have any ideas?

How to manually draw a mouse cursor on MacOS?

For a screen sharing collaborative app, it is desirable to give the remote party a mouse cursor also.
Is it possible to leverage NSCursor?
If not, I guess I must create a small accurately positioned top-level window displaying a bitmap (I've asked here if there is some way to retrieve the bitmaps from the system cursors).
Code/link contributions appreciated. If I figure it out first I will (as always) answer my own question.

Screenshot of a specific window (HWND, HW accelerated)

I need to capture snapshots/screenshots of a specific window (HWND) that is using HW acceleration and record them to a video stream.
With BitBlt or PrintWindow I'm able to capture image data only if the window is not HW accelerated; otherwise I get a black texture.
I tried user32.dll's undocumented DwmGetDxSharedSurface to get the DirectX surface handle, but it fails with an error:
ERROR_GRAPHICS_PRESENT_REDIRECTION_DISABLED - desktop windowing management subsystem is off
(Edit: it fails for certain applications, e.g. "calculator.exe")
I also tried dwmapi.dll's undocumented functions DwmpDxUpdateWindowSharedSurface and DwmpDxGetWindowSharedSurface, and managed to retrieve what looks like a valid DirectX surface handle (its d3dFormat, width and height were valid). OpenSharedResource did not complain and created a valid ID3D11Texture2D. The problem is that all bytes are zeros (I get a black texture). I might be doing something wrong here, or the undocumented DWM functions no longer work on Windows 10.
Edit: I'm able to get image data for some applications like Windows Explorer, Paint, etc., but for others such as Slack I get an all-zeros/black image.
Edit: When capturing e.g. VLC, I get this:
Question:
Is there any other way to capture image data of a HW-accelerated window?
Note: I don't want to capture the entire desktop.
You can use PrintWindow with nFlags=2 (see the sketch below).
Or use the Magnification API (it can exclude windows).
Or try to hack dwm.exe.
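A minimal sketch of the first suggestion; the flag value 2 corresponds to PW_RENDERFULLCONTENT in newer Windows SDKs, and CaptureWindowToBitmap is just an illustrative helper name:

    #include <windows.h>

    // Flag value 2; only defined in newer SDK headers, so spell it out.
    #ifndef PW_RENDERFULLCONTENT
    #define PW_RENDERFULLCONTENT 0x00000002
    #endif

    // Captures the client area of hwnd into a new HBITMAP.
    // The caller owns the bitmap and must DeleteObject() it.
    bool CaptureWindowToBitmap(HWND hwnd, HBITMAP* outBitmap)
    {
        RECT rc;
        if (!GetClientRect(hwnd, &rc)) return false;
        const int width  = rc.right - rc.left;
        const int height = rc.bottom - rc.top;

        HDC hdcWindow = GetDC(hwnd);
        HDC hdcMem    = CreateCompatibleDC(hdcWindow);
        HBITMAP hbmp  = CreateCompatibleBitmap(hdcWindow, width, height);
        HGDIOBJ old   = SelectObject(hdcMem, hbmp);

        // PW_RENDERFULLCONTENT asks DWM to render the full composited
        // content of the window, which also works for DX-rendered windows.
        BOOL ok = PrintWindow(hwnd, hdcMem, PW_RENDERFULLCONTENT);

        SelectObject(hdcMem, old);
        DeleteDC(hdcMem);
        ReleaseDC(hwnd, hdcWindow);

        if (!ok) { DeleteObject(hbmp); return false; }
        *outBitmap = hbmp;
        return true;
    }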

Is it possible to intercept the image data of the Mac OSX screen?

I would like to access the whole contents of a Mac OSX screen, not to take a screenshot, but to modify the final (or as final as possible) rendering of the screen.
Can anyone point me in the direction of Cocoa / Quartz or other API documentation on this? In short, I would like to access and manipulate part of the OSX render pipeline, not just for one app but for the whole screen.
Thanks
Ross
Edit: I have found CGGetDisplaysWithOpenGLDisplayMask. I'm wondering if I can use OpenGL shaders on the main screen.
You can't install a shader on the screen, as you can't get to the screen's underlying GL representation. You can, however, access the pixel data.
Take a look at CGDisplayCreateImageForRect(). In the same documentation set you'll find information about registering callback functions to find out when certain areas of the screen are being updated.
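A minimal sketch of the CGDisplayCreateImageForRect() approach (it is a plain C call, shown here in a small C++ program; build with -framework CoreGraphics):

    #include <CoreGraphics/CoreGraphics.h>

    int main()
    {
        CGDirectDisplayID display = CGMainDisplayID();
        CGRect region = CGRectMake(0, 0, 400, 300);   // portion of the screen

        // Snapshot of that rectangle of the display as a CGImage.
        CGImageRef image = CGDisplayCreateImageForRect(display, region);
        if (image == NULL) return 1;

        // The pixel data can now be inspected or processed off-screen.
        size_t width  = CGImageGetWidth(image);
        size_t height = CGImageGetHeight(image);
        (void)width; (void)height;

        CGImageRelease(image);
        return 0;
    }

This returns a copy of the pixels for reading and processing; it does not change what the display itself is showing.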

Mirroring a portion of the screen to an external display (in OSX)

I would like to write a program that can mirror a portion of the main display into a new window. Ideally this new window could then be displayed on an external monitor. I have seen this utility for a flight sim that does this on a PC (a multifunction display extractor).
Click here for a screenshot of the program (MFD Extractor)
This would be a live window, i.e. a constantly updated video display, not just a static graphic.
I have looked at screen magnifiers and VNC clients for ideas, but I think I need to write something from scratch. I have tried to do some reading on OSX programming, but where do I start in terms of gaining access to the display? I somehow need to extract the graphics from a particular program. Is it best to go near the final output stage (the individual pixels sent to the display) or somewhere nearer the window management stage?
Any ideas or pointers would be much appreciated. I just need somewhere to start from.
Regards,
There are a few ways to do this:
Quartz Display Services will let you get access to the video memory for a screen.
Quartz Window Services (a.k.a. CGWindow) will let you create an image of everything that lies below a window. If you create a borderless, transparent, empty, high-level window whose frame occupies an entire screen, everything below it will be everything on that screen. (Of course, you could create a smaller window in order to copy a section of the screen.)
There's also a way to do it using OpenGL that I never fully understood. That technique is demonstrated by a couple of code samples, OpenGLScreenSnapshot and OpenGLCaptureToMovie. It's more or less obsoleted by CGWindow, though.
Each of those will get you an image that you can then show or write to a file or something.
To show an image, use NSImageView or IKImageView. If you want to magnify it, IKImageView has a zoomFactor property, but if you want nearest-neighbor scaling (like Pixie, DigitalColor Meter, or xScope), I think you'll need to write a custom view for that (but even that isn't all that hard).
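A minimal sketch of the simplest form of the CGWindow approach: instead of building the transparent full-screen window described above, CGWindowListCreateImage can be asked directly for an image of everything on screen inside a rectangle (build with -framework CoreGraphics):

    #include <CoreGraphics/CoreGraphics.h>

    // Returns a CGImage of everything currently on screen inside 'region'.
    // The caller owns the image and must CGImageRelease() it.
    CGImageRef CopyScreenRegion(CGRect region)
    {
        return CGWindowListCreateImage(region,
                                       kCGWindowListOptionOnScreenOnly,
                                       kCGNullWindowID,
                                       kCGWindowImageDefault);
    }

Calling something like this on a timer and drawing the returned image into a view on the external display would give the live, constantly updated mirror the question asks for.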
