Prevent program from using ClipCursor? - winapi

An old game (Pod) is kept alive with a glide wrapper and can thus be run at custom resolutions larger than the game's native 640x480.
However, due to problems with the glide wrappers, if the game is run at 1920x1080, for example, the cursor can only move inside a (0, 0, 640, 480) rectangle; the original developers evidently used the WinAPI ClipCursor function for this.
This is pretty nasty, because the game menu can't be used properly with the mouse since not all buttons can be reached.
Is it possible to disable ClipCursor() functionality globally? Do I have to inject a DLL (I've never fully done this before), or would it be enough to let a C# app run in the background, watch for the game process, and reset ClipCursor() to the real screen area after the process has started?

I seriously doubt it's calling ClipCursor() more than once. Try writing a small program that calls ClipCursor() to set the clip region back to the size of your desktop, and run it after your game has started.
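A minimal sketch of such a helper in C++ (Win32), assuming resetting the clip once after launch is enough; if the game re-applies the clip, run it again or loop it on a timer:

#include <windows.h>

int main() {
    // Span the full virtual desktop (all monitors), not just the primary one.
    RECT desktop = {
        GetSystemMetrics(SM_XVIRTUALSCREEN),
        GetSystemMetrics(SM_YVIRTUALSCREEN),
        GetSystemMetrics(SM_XVIRTUALSCREEN) + GetSystemMetrics(SM_CXVIRTUALSCREEN),
        GetSystemMetrics(SM_YVIRTUALSCREEN) + GetSystemMetrics(SM_CYVIRTUALSCREEN)
    };
    // ClipCursor(NULL) would remove the clip entirely; resetting to the
    // desktop rectangle is equivalent in practice.
    return ClipCursor(&desktop) ? 0 : 1;
}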
Edit:
Depending on your skill level, you could also try using OllyDbg to step through the program, find where it calls the ClipCursor() API, and insert a jump to skip over it.

Related

How does OBS record a hidden window?

I'm making a program to record a window that is obscured by another window, using Python and the Win32 API.
After many searches, I succeeded in capturing the hidden window through its hwnd and BitBlt, but the execution time of my code is not stable.
I wanted to offer recording at a selectable 30~60 fps, but the time required to capture the hidden window and write() it to the cv2 video object is irregular, so I can't produce a 60 fps video.
So I thought of OBS and Discord. OBS manages stable recording of obscured windows, and Discord has a feature that lets you select a specific window and share it with multiple people in real time (this also works for hidden windows).
I'd like to know how these programs provide stable video for occluded windows. I'm a student, not an expert, and I'm asking because the vast OBS source code on GitHub is difficult for me to analyze. Can someone explain how the programs above capture the screen?
Last time I checked, OBS was doing it with low-level hacks instead of APIs.
Specifically, they wrote a DLL which they inject into the target application using the CreateRemoteThread WinAPI. Then they patch the application's code to intercept calls to the IDXGISwapChain::Present method. Once a call is intercepted, the injected code has access to the D3D frame buffer texture. It can copy that texture into another texture on the GPU and then do something with the copy. One possibility is DXGI surface sharing, to pass the copy from the target application to the capturing process. The APIs for that don't require both sides of the sharing to be in the same process; textures can be shared across processes just fine.
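For reference, here is a minimal sketch of just the injection step in C++ (the Present hook inside the injected DLL is a much bigger job and is not shown); this is the classic CreateRemoteThread + LoadLibraryA pattern, not OBS's exact code:

#include <windows.h>
#include <cstring>

bool InjectDll(DWORD pid, const char* dllPath) {
    HANDLE process = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
    if (!process) return false;

    // Write the DLL's path into the target process's address space.
    SIZE_T len = std::strlen(dllPath) + 1;
    void* remote = VirtualAllocEx(process, nullptr, len,
                                  MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    bool written = remote &&
        WriteProcessMemory(process, remote, dllPath, len, nullptr);

    // kernel32 is mapped at the same base in every process of a session,
    // so LoadLibraryA can serve directly as the remote thread entry point,
    // with the path we just wrote as its argument.
    HANDLE thread = nullptr;
    if (written) {
        auto loadLibrary = (LPTHREAD_START_ROUTINE)GetProcAddress(
            GetModuleHandleA("kernel32.dll"), "LoadLibraryA");
        thread = CreateRemoteThread(process, nullptr, 0,
                                    loadLibrary, remote, 0, nullptr);
    }
    if (thread) {
        WaitForSingleObject(thread, INFINITE);
        CloseHandle(thread);
    }
    CloseHandle(process);
    return thread != nullptr;
}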
Unfortunately for you, their approach is borderline impossible to re-implement in higher-level languages like Python. Such things are only doable in C++ or similar low-level languages, and relatively hard to implement and debug.
@dy.kim, don't be afraid of large codebases. window-capture.c and the OBS GUI fairly obviously list "bitblt" and "Windows Graphics Capture" as the two methods OBS uses to capture windows, with the preference going to WGC if neither is specified.

AccessibleObjectFromPoint and per-monitor DPI

I'm using accessibility with the AccessibleObjectFromPoint function, and I'd like it to work correctly on a per-monitor DPI environment. Unfortunately, I can't get it to work. I tried many things, and the situation for now is:
My app is marked as per-monitor-DPI-aware in the manifest. (True/PM)
I use GetCursorPos and then AccessibleObjectFromPoint.
How can the problem be reproduced:
Have two monitors, one with 100% DPI, the other with 125%.
Run Chrome on the 125% monitor.
Use AccessibleObjectFromPoint on one of the tab names; it won't work.
It works with some apps (DPI-aware, it seems, like explorer), but doesn't work with others. I tried several relevant functions, such as GetPhysicalCursorPos and PhysicalToLogicalPointForPerMonitorDPI, but nothing works.
It's worth noting that Microsoft's inspect.exe works as expected.
I’ve been struggling with this exact same problem for several weeks and can now tell you my findings. Unfortunately I can’t give you more than a hint of code, because the project I am working on is proprietary.
The issue started with Windows 8.1. The problem did not exist on Windows 7 or Vista, because AccessibleObjectFromPoint always used raw physical coordinates, as documented here: https://msdn.microsoft.com/en-us/library/windows/desktop/dd317984(v=vs.85).aspx .
“Microsoft Active Accessibility does not use logical coordinates. The following methods and functions either return physical coordinates or take them as parameters.” This has not been true since Windows 8.1.
AccessibleObjectFromPoint now uses a flawed calculation that cannot always find the correct window, for reasons similar to my question here: High DPI scaling, mouse hooks and WindowFromPoint.
My findings lead me to one conclusion: the API is broken. That does not mean it is impossible to do, though.
Possible solutions I have partially tested and that seem to work follow.
The prerequisites are that you (a rough sketch follows this list):
1. Make your process per-monitor DPI aware, NOT USING THE MANIFEST (more on that later).
2. Determine the hWnd of the window you want to query (WindowFromPoint() variants).
3. Determine the monitor DPI of the queried hWnd.
4. Determine the DPI of your process.
5. Determine the DPI of the queried hWnd.
6. Determine the monitor origin and offset for the queried hWnd (MonitorFromWindow() and GetMonitorInfo()).
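A rough sketch of the data-gathering steps above in C++, assuming Windows 10 1607+ APIs (on Windows 8.1, GetDpiForMonitor from <shellscalingapi.h> replaces GetDpiForWindow); step 1 means calling SetProcessDpiAwareness / SetProcessDpiAwarenessContext at startup instead of relying on the manifest:

#include <windows.h>

struct TargetInfo {
    HWND hwnd;
    UINT windowDpi;    // DPI of the queried window (steps 3/5)
    UINT processDpi;   // DPI our own process runs at (step 4)
    RECT monitorRect;  // origin/extent of the window's monitor (step 6)
};

TargetInfo QueryTarget(POINT physicalPt) {
    TargetInfo info = {};
    info.hwnd = WindowFromPoint(physicalPt);      // step 2
    info.windowDpi = GetDpiForWindow(info.hwnd);
    info.processDpi = GetDpiForSystem();
    MONITORINFO mi = { sizeof(mi) };
    GetMonitorInfo(MonitorFromWindow(info.hwnd, MONITOR_DEFAULTTONEAREST), &mi);
    info.monitorRect = mi.rcMonitor;
    return info;
}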
What comes next depends on your platform.
Windows 10.0.14393+
Write a function that gets the IAccessible (AccessibleObjectFromWindow()) from the top-level window and then recursively calls IAccessible::accHitTest until you reach the bottom-most IAccessible, and perhaps ChildID data. Return that as you would from AccessibleObjectFromPoint.
To call it successfully, you will need to scale the (x, y) coordinates into the coordinate space of the queried hWnd, using the DPIs and coordinates fetched in the list above. Watch out for systems where monitors are not the same size, or where monitors are partially offset or arranged above and below each other.
And now for the important part for 10.0.14393: set your thread to the same DPI_AWARENESS_CONTEXT as the hWnd you are querying. Now call your new function. Then revert your thread to per-monitor DPI aware, and voila, it works, even if the window is not maximised. This is why you must not use the manifest.
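A compressed sketch of that idea, with the coordinate scaling from the list above omitted and error handling and reference counting simplified:

#include <windows.h>
#include <oleacc.h>

HRESULT AccessibleFromPointCompat(HWND hwnd, POINT pt,
                                  IAccessible** acc, VARIANT* child) {
    // Temporarily adopt the DPI awareness context of the queried window.
    DPI_AWARENESS_CONTEXT previous =
        SetThreadDpiAwarenessContext(GetWindowDpiAwarenessContext(hwnd));

    IAccessible* current = nullptr;
    HRESULT hr = AccessibleObjectFromWindow(hwnd, OBJID_WINDOW,
                                            IID_IAccessible, (void**)&current);
    VARIANT hit;
    VariantInit(&hit);
    while (SUCCEEDED(hr) && current) {
        // pt must already be scaled into the window's coordinate space.
        hr = current->accHitTest(pt.x, pt.y, &hit);
        if (FAILED(hr) || hit.vt != VT_DISPATCH || !hit.pdispVal)
            break;  // VT_I4 means a simple child id: stop descending
        IAccessible* next = nullptr;
        hit.pdispVal->QueryInterface(IID_IAccessible, (void**)&next);
        hit.pdispVal->Release();
        VariantInit(&hit);
        if (!next) break;
        current->Release();
        current = next;
    }
    *acc = current;   // caller releases
    *child = hit;     // VT_I4 child id, or empty

    // Revert the thread to its previous (per-monitor aware) context.
    SetThreadDpiAwarenessContext(previous);
    return current ? S_OK : hr;
}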
If you are on Windows 8.1 to 10.0.10586, you have a tougher task.
Instead of calling accHitTest as above, you have to recursively call AccessibleChildren and iterate, calling IAccessible::accLocation to determine whether your test point is within each child. This is tricky and starts to get really messy when you get to, e.g., combo boxes in products like Office, which is only system DPI aware.
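A sketch of that fallback, again with the DPI scaling and most reference counting elided; the test point must be in the same physical coordinate space that accLocation reports:

#include <windows.h>
#include <oleacc.h>
#include <vector>

IAccessible* FindDeepestAt(IAccessible* acc, POINT pt) {
    long count = 0;
    if (FAILED(acc->get_accChildCount(&count)) || count == 0)
        return acc;
    std::vector<VARIANT> kids(count);
    long fetched = 0;
    if (FAILED(AccessibleChildren(acc, 0, count, kids.data(), &fetched)))
        return acc;
    for (long i = 0; i < fetched; ++i) {
        if (kids[i].vt != VT_DISPATCH || !kids[i].pdispVal)
            continue;  // simple (VT_I4) children skipped in this sketch
        IAccessible* child = nullptr;
        kids[i].pdispVal->QueryInterface(IID_IAccessible, (void**)&child);
        kids[i].pdispVal->Release();
        if (!child) continue;
        long x, y, w, h;
        VARIANT self; self.vt = VT_I4; self.lVal = CHILDID_SELF;
        // Does this child's reported rectangle contain the test point?
        if (SUCCEEDED(child->accLocation(&x, &y, &w, &h, self)) &&
            pt.x >= x && pt.x < x + w && pt.y >= y && pt.y < y + h) {
            IAccessible* deepest = FindDeepestAt(child, pt);
            if (deepest != child) child->Release();
            return deepest;  // caller releases
        }
        child->Release();
    }
    return acc;
}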
That’s all I can give you for now.
To do it successfully across platforms (mine has to work from Vista to current Windows), the only really safe bet is to write a wrapper DLL in C++ that can determine at runtime which OS it is on and change code paths accordingly. The reason you want to do it in C++ is to avoid passing IAccessible objects across the .NET/unmanaged marshalling boundary; you can call IUnknown::Release on objects you don't need to return on the unmanaged side. You can do it all in .NET, but it will be slow.
P.S. Also watch out for Chrome returning infinite trees where parents are children of their parents; some sanity checks are required. Also, Chrome does not return accRole correctly, and will give you HTML tags instead of VT_I4 values.
Good luck
A fairly workable solution is as follows, in your IAccessible recursive function:
Use GetWindowRect to capture the physical right edge of the main window.
Use accChild.accLocation in a loop to capture the left and width of each object.
Add this simple test:

If l > rct2r.Right And l > arrIACC.x2 Then
    arrIACC.x2 = l + w
End If

If the DPI is 100%, no object lies further out than the physical right edge; if the DPI is above 100%, the close button is ...x pixels offset beyond it. Use the difference to rescale all the width values you make use of:

arrIACC.w1 = CInt(((-rct2r.Left + arrIACC.w1) / arrIACC.x2) * rct2r.Right)

This solution is from an Excel plugin I have developed; I was testing the width of the Quick Access Toolbar (QAT), and my result was within +-5 pixels regardless of DPI.

Calling functions only after drawing is completed

I am making a drawing on an NSView using a timer that is set to update every 0.02 seconds. On each update a physics simulation makes a step, and then Canvas!.needsDisplay = true. It works when the app is in the foreground (usually), but when lag occurs, the simulation progresses even though the view hasn't reflected it yet. How do I pause the timer during these times so the simulation only advances when the NSView can show it? I do not want to call step_over from inside drawRect; that seems like a bad idea, because then it would be harder to stop the simulation.
Generally this kind of update should be done the other way around, by letting the display ask you for frames as it can display them. This is done with a CVDisplayLink (on the Mac; CADisplayLink is the iOS equivalent). Configure it with a method you want called whenever a frame can be drawn.
Generally you do want your simulation to keep moving forward, even if it means dropping frames occasionally. For that, you check the timestamp and use that to work out what time to use for your new frame. But if you only want to move forward when the display can show it, then just update once per call.
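A minimal sketch of the CVDisplayLink C API (usable from C++/Objective-C++ on the Mac); StepSimulation and the context pointer stand in for your own simulation code:

#include <CoreVideo/CoreVideo.h>

static CVReturn OnFrame(CVDisplayLinkRef link, const CVTimeStamp* now,
                        const CVTimeStamp* outputTime, CVOptionFlags,
                        CVOptionFlags*, void* context) {
    // Advance the simulation one displayable frame (or use
    // outputTime->hostTime for time-based stepping), e.g.:
    // StepSimulation(context);
    // This callback runs on a background thread: setting
    // needsDisplay = true must be dispatched to the main thread.
    return kCVReturnSuccess;
}

void StartDriving(void* context) {
    CVDisplayLinkRef link = nullptr;
    CVDisplayLinkCreateWithActiveCGDisplays(&link);
    CVDisplayLinkSetOutputCallback(link, &OnFrame, context);
    CVDisplayLinkStart(link);  // pause later with CVDisplayLinkStop(link)
}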
Note that generating at 50fps is often going to mismatch the system that's trying to draw at 60fps, so you're going to wind up missing frames occasionally. That's one of several reasons not to try to push drawing with a timer.
See also Alternative of CADisplayLink for Mac OS X. Note that trying to draw at 50fps with Core Graphics usually isn't going to give good results in any case. The right tool here in OS X is Core Animation (or SpriteKit for games on 10.10, or OpenGL for more advanced high-speed rendering). You can do very basic animations with an NSTimer (and we did for years before Core Animation came along), but it's not really a tool for complex drawing.

WP7: IsolatedStorage vs. WriteableBitmap

I have a scenario that I need some good solid advice on. The question is really about speed of WriteableBitmap vs. images in IsolatedStorage on the Windows Phone.
I have an app that displays a UserControl (#1) which is a little graphically heavy. When the user swipes it, it transitions in a push-left type of transition to bring in a new UserControl (#2) which is also a little graphically heavy. If the user swipes the other way, control #1 is brought in in the same type of push-transition, this time from the right.
What I do today is take a snapshot of #1, load #2 off screen and take a snapshot of it, put both side-by-side in a Canvas control and animate that control either left or right. One of the reasons I don't just use the controls and animate them is they may have animation that starts when they are loaded - my current technique allows me to capture a screen shot of pre-animation and post-animation, depending on which direction they go in.
What I'm wondering, however, is whether it would be better/faster to do the above only the first time, save the WriteableBitmap to IsolatedStorage with Extensions.SaveJpeg, and just use that in subsequent transition animations.
Would load/render/WriteableBitmap each time generally be faster, or would loading the JPEG from IsolatedStorage each time be faster? I see that the Transitions control in the SDK doesn't really do either of these, so I'm open to different suggestions that might also improve performance.
I expect this to be very dependent on the hardware and the application, so it is pretty hard to give an answer based on this input. It doesn't look too hard to test (on actual hardware and with the actual application), so my advice is to build both and measure.
The applications I have been working with use both approaches and to be honest I haven't noticed much difference.
Also, you might try enabling bitmap caching on the controls (CacheMode="BitmapCache"). This gives you a cached-bitmap implementation that is very fast.

Custom Slider for video on iPad

I have a custom UISlider and use currentPlaybackTime to change values of an MPMoviePlayerController object.
The problem is that when I scrub at a fast rate using the slider, it doesn't respond as fast as I would like.
Is there any better way to build a fast, interactive scrubber for the iPad, targeting OS 3.2 and up?
Well, there are two issues here, and only one of them is under your direct control.
Multimedia content is commonly compressed using some kind of delta compression, hence quick and exact seeking is not a trivial task. Since that is common to most codecs and you cannot directly change it, you will have to live with it.
The only way to increase seeking responsiveness on the content side (when encoding) is to reduce the GOP size - that is, fewer P-frames between the I-frames.
When using a slider or a similar control, you could, instead of directly connecting the current playback position to it, handle manual changes in an indirect fashion, as sketched below. Run a timer-based job that, whenever the slider/scrubber has been moved, tries to adjust the playback position towards that new value. While the player is seeking, prevent the scrubber from getting feedback from the current playback location, but allow it again once the player is in the playing state. That way the user does not directly experience the clunky seek feedback.
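A platform-neutral sketch of that indirect scheme in C++; the Player type and its seekTo/currentTime calls are placeholders for MPMoviePlayerController and friends:

#include <cmath>

struct Player {                       // placeholder for the real player
    double currentTime() const;
    void   seekTo(double seconds);    // asynchronous seek
};

struct Scrubber {
    double target = 0;                // last value the user dragged to
    bool   userDragging = false;
    bool   seekInFlight = false;

    // The UI calls this whenever the slider moves.
    void OnSliderMoved(double seconds) { target = seconds; userDragging = true; }

    // A periodic timer (e.g. every 100 ms) calls this.
    void Tick(Player& player) {
        if (seekInFlight) return;     // wait for the pending seek to finish
        if (userDragging && std::fabs(player.currentTime() - target) > 0.5) {
            seekInFlight = true;
            player.seekTo(target);
        }
    }

    // Player notification: seek finished / back in playing state. Only now
    // may the slider resume mirroring the current playback position.
    void OnSeekCompleted() { seekInFlight = false; userDragging = false; }
};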
