Allow off-screen touches with the TouchInjection Windows 8+ API?

I have a situation where I need to embed some third-party, closed-source Unity applications into our own. I'm injecting a DLL that creates a DX11 shared texture from their swap chain. That part works and is done.
Additionally, I want to hide the form wrapping the Unity app (luckily, you can set its parent handle via a command-line argument) so I have 100% control over what happens to its texture in our own app (and so it doesn't interfere with the overall look of our app). This also works fine; I get the texture without a problem even when the Unity form is completely off-screen.
Now, the problem is that this Unity application has to be used with multitouch, and after a fair amount of googling/Stack Overflow reading I've more or less concluded that there's no way (or at least I haven't found one) to compose valid WM_POINTER* messages for just one window in Windows. (This is supported by the fact that you need to call a separate WinAPI function, such as GetPointerTouchInfo, to get the full data of a pointer/touch based on the pointer ID received in the wParam of a WM_POINTER* message.)
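To illustrate why: here's roughly what a receiving window has to do with WM_POINTERDOWN (a C++ sketch, assuming a Windows 8+ SDK target; the handler body is hypothetical):

#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_POINTERDOWN: {
        // The message only carries the pointer ID; the actual touch data lives
        // in OS-side pointer state, which a posted message cannot fabricate.
        UINT32 id = GET_POINTERID_WPARAM(wParam);
        POINTER_TOUCH_INFO info;
        if (GetPointerTouchInfo(id, &info)) {
            POINT p = info.pointerInfo.ptPixelLocation; // screen coordinates
            (void)p; // ... handle the touch ...
        }
        return 0;
    }
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}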
So I'm using the TouchInjection Windows API (InitializeTouchInjection and InjectTouchInput). Information about these APIs on the internet is misleading at best, but I've actually worked through all their quirks, and it works fine as long as the Unity form is visible on the screen; in other words, as long as the touch position is inside the screen boundaries.
And now, finally, the problem: when I specify an off-screen coordinate for the injected touches, I get an ERROR_INVALID_PARAMETER (87 / 0x57) system error; otherwise it works. Is there a way to turn off this check in Windows? Or has anybody solved this problem some other way?
(Our app is not an end-user one; we have full control over the environment it runs in, and system-wide modifications are also OK.)
Thanks in advance!

You can't turn off this check: the error code is a return value produced inside the function, and it signals that the call failed, in which case the function changes nothing and only sets the error code. If the check could be disabled, what would the status of the call be: success or failure?
You need to check the coordinates manually and decide what to do before injecting.
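For example, a minimal sketch (assuming Windows 8+ and the documented TouchInjection API; the helper name and coordinates are made up for illustration) that clamps coordinates to the virtual screen before injecting:

#include <windows.h>
#include <algorithm>

// Hypothetical helper: clamp to the virtual screen (all monitors) so the
// injected position is always one that InjectTouchInput will accept.
bool InjectTouchAt(LONG x, LONG y, UINT32 pointerId, POINTER_FLAGS flags)
{
    const LONG left   = GetSystemMetrics(SM_XVIRTUALSCREEN);
    const LONG top    = GetSystemMetrics(SM_YVIRTUALSCREEN);
    const LONG right  = left + GetSystemMetrics(SM_CXVIRTUALSCREEN) - 1;
    const LONG bottom = top  + GetSystemMetrics(SM_CYVIRTUALSCREEN) - 1;

    POINTER_TOUCH_INFO contact = {};
    contact.pointerInfo.pointerType       = PT_TOUCH;
    contact.pointerInfo.pointerId         = pointerId;
    contact.pointerInfo.pointerFlags      = flags;
    contact.pointerInfo.ptPixelLocation.x = std::clamp(x, left, right);
    contact.pointerInfo.ptPixelLocation.y = std::clamp(y, top, bottom);
    contact.touchFlags = TOUCH_FLAG_NONE;
    contact.touchMask  = TOUCH_MASK_NONE;

    return InjectTouchInput(1, &contact) != FALSE;
}

int main()
{
    // Up to 10 simultaneous contacts, default visual feedback.
    if (!InitializeTouchInjection(10, TOUCH_FEEDBACK_DEFAULT))
        return 1;

    // Touch-down then touch-up at (100, 200) with contact ID 0.
    InjectTouchAt(100, 200, 0, POINTER_FLAG_DOWN | POINTER_FLAG_INRANGE | POINTER_FLAG_INCONTACT);
    InjectTouchAt(100, 200, 0, POINTER_FLAG_UP);
    return 0;
}

Since clamping changes where the touch lands, an alternative is to map your app's touch coordinates into the hidden form's actual on-screen position (e.g. park the form inside the virtual screen but behind your own window) before injecting.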

Related

Why is moving a GUI window from code discouraged?

Well, the title almost says it all: why should I not move a GUI (e.g. Gtk) window on screen from code? In Gtk 3 there was an API for moving windows on screen, but it was removed in Gtk 4 because it is considered bad for a window to be moved by code rather than by the user (don't ask me for sources; I read it somewhere but have forgotten where and cannot find it). I cannot think of any reason why it would be bad, but I can think of several reasons why it could be good: for example, to restore the position of a window between application restarts. Could you please shed some light on this?
The major reason why is that it can't possibly work cross-platform, so it is broken API by definition. That’s also why it was removed in GTK4. For example: this is impossible to implement when running on top of a Wayland session, since the protocol doesn't allow getting/setting global coordinates. If you still want to have something similar working, you'll have to call the specific platform API (for example, X11) for those platforms that you want to support.
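For example, a minimal Xlib sketch (assuming an X11 session and linking with -lX11; the XID here is a hypothetical placeholder, and with GTK you would obtain it through the GDK X11 backend):

#include <X11/Xlib.h>

int main()
{
    Display* dpy = XOpenDisplay(nullptr);
    if (!dpy) return 1;

    Window xid = 0x1234567;          // hypothetical window ID, for illustration only
    XMoveWindow(dpy, xid, 100, 100); // absolute root-window coordinates
    XFlush(dpy);                     // push the request to the X server

    XCloseDisplay(dpy);
    return 0;
}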
As for why it's not supported by some display protocols: it's bad for UX and security. In terms of UX, some compositors have special behaviour because they need to work on a small device, have a kiosk mode in which everything always runs fullscreen, or provide a tiling experience; applications that position their own windows then tend to behave unexpectedly. In terms of security, if you allow this, it is technically possible for an application to reposition and resize itself so that it covers your screens while making itself transparent, without this being noticeable, which would let it scrape all input.

StartScreenCapturebyWindowId() not excluding overlapping windows for certain programs (Agora Unity)

I am trying to set up individual window sharing for a project in Unity for Windows. The way I'm currently going about this is to use EnumWindows(), IsWindowVisible(), and GetWindowText() to build a dictionary of window titles and handles, then call StartScreenCapturebyWindowId() to share the selected window.
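The enumeration part looks roughly like this (sketched here in plain Win32 C++ rather than my actual Unity/C# code, but the calls are the same):

#include <windows.h>
#include <map>
#include <string>

static BOOL CALLBACK EnumProc(HWND hwnd, LPARAM lparam)
{
    auto* windows = reinterpret_cast<std::map<std::wstring, HWND>*>(lparam);
    if (IsWindowVisible(hwnd)) {
        wchar_t title[256] = {};
        if (GetWindowTextW(hwnd, title, 256) > 0)
            (*windows)[title] = hwnd; // map window title -> handle
    }
    return TRUE; // continue enumeration
}

std::map<std::wstring, HWND> ListVisibleWindows()
{
    std::map<std::wstring, HWND> windows;
    EnumWindows(EnumProc, reinterpret_cast<LPARAM>(&windows));
    return windows;
}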
This works relatively well for most processes: the window of the process, and only that window, is streamed. However, for certain programs (like Google Chrome, Discord, and Windows Photos) the captured area is set correctly, but overlapping windows are not culled out.
Does anyone know what could be causing this problem? Is there something wrong with the way I'm grabbing the handles for these windows? Or is there something about starting a screen capture that I am missing?
You certainly did the correct things; you've simply hit a limitation of the Windows part of the SDK. To understand this better: that set of programs are UWP applications, which share their visible pixels differently. Previous versions of the Agora SDK could not even show such a window; starting from 3.0.1, the SDK uses a rectangle-cutting method to get the window display. You can read more in the online documentation for that API.
There isn't much Agora can do about this in the near term, so you will just need to deal with the user experience (e.g. by warning users) or look at solutions like using the Web SDK instead.

Nesting an application inside an OS X subview

I'm looking for a way to embed another application into my own view.
The business reason is that the company has many small Electron apps (basically small portable web programs with a self-contained browser) that it wants to embed inside an OS X program. These Electron apps would ideally integrate and display seamlessly inside a subview, so they look like little web frames inside our larger program.
I think it would be easiest programmatically to open another program as a subview, but I'll take whatever I can get. Maybe even capturing its NSWindow somehow. (The Electron source is available, so it is easily discoverable.) Maybe there is a way to dock the other program inside mine, or (getting more desperate) to find its view and send commands to constrain its size and location on top of mine.
So far, everything I've found says it is not really possible. I can take the more desperate course: launch a process, find its window, position it over a spot in my display, and send messages to move the other window whenever my window is moved or its content is scrolled. But that isn't really integration; the menu stays separate, and so on. I cannot truly incorporate it.
Any ideas or helpful implementation details?
EDIT 1: Thanks for those responses. What if we could have the Electron apps expose their NSWindow somehow? Could that be leveraged? I'm thinking the application could send messages and (somehow, I'm not sure exactly how) set its parent window to one inside ours. In the Windows API this is much easier, since you can call SetParent on anything, even windows in different processes. But Cocoa seems more difficult.
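For comparison, this is roughly all it takes on Windows (a C++ sketch; the handles are assumed to come from elsewhere):

#include <windows.h>

// Re-parent a top-level window from another process into our own window.
void EmbedForeignWindow(HWND child, HWND newParent)
{
    // Turn the top-level window into a child window first.
    LONG_PTR style = GetWindowLongPtr(child, GWL_STYLE);
    style = (style & ~(WS_POPUP | WS_CAPTION)) | WS_CHILD;
    SetWindowLongPtr(child, GWL_STYLE, style);

    SetParent(child, newParent);

    // Fit the embedded window into the parent's client area.
    RECT rc;
    GetClientRect(newParent, &rc);
    MoveWindow(child, 0, 0, rc.right, rc.bottom, TRUE);
}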
This isn't really a thing you can do in Mac OS X. Applications are not "composable" in the way you're hoping for. While it is possible to share a view with a subprocess under certain very specific circumstances (e.g., Safari or Chrome tab renderers), this requires the subapplication to be written in a very specific way to permit it. It's not something that would be feasible in the situation you're describing.
If you have access to the source of these Electron apps, consider combining them into a single overarching Electron application. Alternatively, if it's not possible for these applications to coexist within a single Electron app, you may want to consider using something like Chromium Embedded Framework to build your wrapper application; note, however, that this may require you to implement parts of the Electron framework yourself.
You cannot do that. Cocoa allows only one NSApplication instance per UI app, so you will have to fork/exec a new process and launch your applications from there.
If you can recompile the source code, you can create a custom subclass of NSApplication and use that custom class in all the applications, or you can run the other applications on an NSThread without an NSApplication instance and go from there.

LibGDX resume() function

I have issues with OpenGL ES context loss under LibGDX, so I'm trying to figure out how to solve the problem. My first step was to re-initialize all my textures when the resume function is called in one of my classes that extends Screen, like this:
@Override
public void resume() {
    Tile.initTiles();
}
The resume function re-creates all my tiles (including their textures), so I thought this would work. However, according to the documentation:
ApplicationListener Docs
According to the docs, the resume function should never be called on the desktop. Yet resume is never called on my Android phone, while on my desktop I tell the program to print "true" to the console in the resume method, and voila, resume actually is called.
My main questions are:
Why would the resume function be called on my desktop but not on my Android phone?
How can I reload my textures on resume on my Android phone? Currently my textures are white when resuming the game after hitting the back key. Interestingly enough, the textures reload fine when exiting through the home button.
I'll quickly explain how I do it, and for me it worked out so far without problems on both desktop and Android.
First of all, I use an AssetManager for all my assets, including Textures. I usually load all my assets via a loading screen before entering the actual gameplay screen, but it should also work if you load them in your Screen.show() method.
Whenever I need an asset, the only way I retrieve them is via AssetManager.get(...). My AssetManager is actually a public static member of the base game class, so it can be accessed from anywhere in my code and there is only a single one of them.
In my Screen.resume() method I call AssetManager.finishLoading(), though I'm not sure this is really necessary.
Right after the game starts and your AssetManager is instantiated, I call the static method Texture.setAssetManager(...).
This closes the circle. When your OpenGL context is lost on Android, LibGDX will actually revive it for you. Since you've set the AssetManager for your Textures, the manager will be able to find the textures after they have been reloaded, and AssetManager.finishLoading() will wait until the reloading is finished. After that, everything should work as it did before the context loss.
Why Screen.resume() is not called for you, I cannot say; for me it is called on Android. Maybe you need to update your LibGDX version.

Ruby and Ubuntu's Notify-OSD

I'm using ruby-libnotify in a Ruby GTK app, and on Hardy it works great for creating a bubble popup in Ubuntu. Then I had others try the app on Jaunty, and with the new Notify-OSD system the notification turned into a dialog box instead of the bubble popup I expected.
I looked into it and found that the Ubuntu wiki says the problem is that I set a timeout of 0:
Some programs specify an expire_timeout of 0 to produce notifications that never close by themselves, assuming that they can be closed manually as they can in notification-daemon. Because this is usually done for a message that requires response or acknowledgement, Notify OSD presents it as an alert box rather than as a bubble.
Is there a way to use libnotify to get a normal bubble with a "never expire" timeout? I would actually prefer to use the old notification system, since Notify-OSD doesn't seem to support permanent bubbles at all.
The dialog is unacceptable for me: it doesn't stay above all windows, so the user won't necessarily see the popup right away (which is the whole point of using a bubble popup).
It looks like you are just trying to use Notify-OSD for something it was not designed for. Notify-OSD bubbles are informational and transient, meaning that no critical information should be put in them as they are made to be ignorable.
According to the Ubuntu Design Guidelines, what you are trying to make is a morphing alert box, which should suit your needs nicely.
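If you do stay with libnotify, the timeout behaviour boils down to this (a minimal C sketch of the API that ruby-libnotify wraps; note that older libnotify versions take a fourth argument in notify_notification_new):

#include <libnotify/notify.h>

int main(void)
{
    notify_init("my-app"); /* application name; placeholder for illustration */

    NotifyNotification *n =
        notify_notification_new("Title", "Body text", NULL);

    /* A timeout of 0 (NOTIFY_EXPIRES_NEVER) is exactly what makes
       Notify-OSD fall back to an alert box; use the default to get
       a bubble, at the cost of it expiring on its own. */
    notify_notification_set_timeout(n, NOTIFY_EXPIRES_DEFAULT);

    notify_notification_show(n, NULL);
    notify_uninit();
    return 0;
}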
A sideways answer, perhaps: if the notification API doesn't quite map onto what you want to do, you could look into a more general library that lets you draw your own on-screen bubbles. xosd comes to mind, though I remember it being quite limited; perhaps there are other options...
I remember using some command-line tool to display notifications. You could just call it from Ruby using system or backticks.
