How to reparent a Cocoa window?

I need to host my window on top of a window of another application.
How can I enumerate the windows of another Cocoa application? Is it possible to control them?
If not: how can I draw on top of a window of another application?
Thanks!

How can I enumerate the windows of another Cocoa application?
You can enumerate the windows of another application using the Accessibility API. It doesn't matter whether that application is Cocoa or not.
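For example, here is a minimal sketch in C of listing another application's windows with the Accessibility API. The PID is a hypothetical placeholder, and your process must already have been granted accessibility access (see AXIsProcessTrusted):

    #include <ApplicationServices/ApplicationServices.h>

    int main(void)
    {
        pid_t pid = 12345;  /* hypothetical PID of the target application */
        AXUIElementRef app = AXUIElementCreateApplication(pid);

        CFArrayRef windows = NULL;
        if (AXUIElementCopyAttributeValue(app, kAXWindowsAttribute,
                                          (CFTypeRef *)&windows) == kAXErrorSuccess) {
            for (CFIndex i = 0; i < CFArrayGetCount(windows); i++) {
                AXUIElementRef win =
                    (AXUIElementRef)CFArrayGetValueAtIndex(windows, i);
                CFTypeRef title = NULL;
                if (AXUIElementCopyAttributeValue(win, kAXTitleAttribute,
                                                  &title) == kAXErrorSuccess) {
                    CFShow(title);  /* print the window title */
                    CFRelease(title);
                }
            }
            CFRelease(windows);
        }
        CFRelease(app);
        return 0;
    }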
Is it possible to control them?
Here, it does matter, indirectly, whether the application is Cocoa (or Carbon using standard controls). More precisely, it matters whether the application is accessible.
It is usually possible to move another application's window, resize it, or do simple things with the controls in it (such as pressing buttons).
It is not possible to tape one of your windows to a window in another application. You will have to observe the other window's location and move your window when it moves. Smoothly following a live drag this way is not possible.
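A sketch of that observation approach, using an AXObserver to be notified when the other window moves. The callback body is an assumption for illustration; you would reposition your own window there:

    #include <ApplicationServices/ApplicationServices.h>

    static void WindowMoved(AXObserverRef obs, AXUIElementRef element,
                            CFStringRef notification, void *refcon)
    {
        /* Read the new position, then move your own window to match. */
        AXValueRef value = NULL;
        CGPoint pos;
        if (AXUIElementCopyAttributeValue(element, kAXPositionAttribute,
                                          (CFTypeRef *)&value) == kAXErrorSuccess) {
            AXValueGetValue(value, kAXValueCGPointType, &pos);
            CFRelease(value);
            /* ... reposition your window relative to `pos` here ... */
        }
    }

    void WatchWindow(pid_t pid, AXUIElementRef window)
    {
        AXObserverRef observer = NULL;
        if (AXObserverCreate(pid, WindowMoved, &observer) == kAXErrorSuccess) {
            AXObserverAddNotification(observer, window, kAXMovedNotification, NULL);
            CFRunLoopAddSource(CFRunLoopGetCurrent(),
                               AXObserverGetRunLoopSource(observer),
                               kCFRunLoopDefaultMode);
        }
    }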
If not: how can I draw on top of a window of another application?
You can't. You can only draw in your own windows.
You can make a transparent overlay window and draw on that, but that gets you back to the problem of taping one of your windows to a window in another application.
You should probably ask a broader question about whatever it is you hope to achieve by taping one of your windows to a window in another application or by drawing into a window in another application.

Check out CGWindow.h
CGWindowListCreateDescriptionFromArray() is probably what you're looking for.
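For example, a minimal sketch using the related CGWindowListCopyWindowInfo (CGWindowListCreateDescriptionFromArray returns the same kind of dictionaries for an explicit array of window numbers):

    #include <CoreGraphics/CoreGraphics.h>

    int main(void)
    {
        CFArrayRef info = CGWindowListCopyWindowInfo(
            kCGWindowListOptionOnScreenOnly, kCGNullWindowID);

        for (CFIndex i = 0; i < CFArrayGetCount(info); i++) {
            CFDictionaryRef win = CFArrayGetValueAtIndex(info, i);
            /* Each dictionary carries keys such as kCGWindowNumber,
               kCGWindowOwnerName, kCGWindowBounds, and kCGWindowLayer. */
            CFTypeRef owner = CFDictionaryGetValue(win, kCGWindowOwnerName);
            if (owner)
                CFShow(owner);
        }
        CFRelease(info);
        return 0;
    }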

Cocoa does not support reparenting another application's window, or drawing on it.
But there are two ways to get window attributes, like position, size, z-order, etc., for all applications:
Accessibility API (which also lets you control a foreign application's window: move it, resize it, press buttons on it, etc.). If a foreign application does not support the Accessibility API, then...
Quartz Window Services, and a sample code project for them called "Son of Grab".
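As an illustration of the Accessibility side, a sketch that moves a foreign window; the AXUIElementRef would come from an enumeration like the one shown earlier, and it assumes the target app is accessible and your process is trusted:

    #include <ApplicationServices/ApplicationServices.h>

    /* Move another application's window to a new screen position. */
    void MoveForeignWindow(AXUIElementRef window, CGPoint newOrigin)
    {
        AXValueRef pos = AXValueCreate(kAXValueCGPointType, &newOrigin);
        AXUIElementSetAttributeValue(window, kAXPositionAttribute, pos);
        CFRelease(pos);
    }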

Related

Options for getting/setting Windows window states for a browser window

Our agency has a Java web application that uses a third-party Java applet as a TIFF viewer. In this application, JavaScript spawns a separate window (and yes, it must be a separate window). We are currently using an in-house ActiveX control so that the HTML window can have its window state, window size and position, and screen device persisted and restored via localStorage. Our agency is IE11-only, so for now using it is not a problem. It's embedded in the page and operates on the "containing window".
This control was made many years ago in VB6. It appears that Visual Studio doesn't support the creation of ActiveX controls, and I don't personally have the knowledge or skill to create one another way.
Please do not suggest the HTML DOM window and screen objects! Most answers I've searched for just refer to these. They can't get the entire browser window size (viewport + chrome + toolbars + title bar), nor does JavaScript have any idea about Windows-specific things like window state and screen device (it's always the OS primary screen). I understand the HTML DOM can't be tied to anything OS-specific if it is to be implementable on multiple platforms.
Up until now this control has worked pretty well: you can, for example, maximize on a given screen and later restore the window to the same screen. But I realize that ActiveX won't be supported forever, and it's IE-only... It seems to me, though, that the need for such functionality can't be totally uncommon. Some may suggest that the application UI should be changed so that this is less of a problem, but being able to put the viewer on a different monitor from the application, and having that window restored exactly to its last position, is important to us. I do understand that if you reuse that window the user only has to position it once, but we'd rather not require this of the user.
Question: are there any alternatives to COM/ActiveX for interacting with the Win32 API? Are there any other methods of accurate window persistence (covering screens other than the OS primary, and maximized/minimized/normal window states) that use something other than the Win32 API?

Using CGDisplayStream to detect window movement

I want to detect when a window is being moved in real time and figured that CGDisplayStreamCreate etc. should provide just that. But I'm having difficulty deciding which window is being moved when my CGDisplayStreamFrameAvailableHandler is called. Is there a direct way to match the updated rects with an app and its windows?
CGDisplayStream cannot tell you which applications/windows are responsible for a given screen update. You might be able to use another API like Accessibility to determine window locations and then guess which of the kCGDisplayStreamUpdateMovedRects corresponds to each window, but that will not be very reliable. If you're going to go the route of Accessibility, you may as well use Accessibility notifications for window move events: How can my app detect a change to another app's window?.
If you also need the pixel contents of the windows when they are moving, then you'll need to do some unfortunate time alignment between CGDisplayStream and Accessibility callbacks.
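A sketch of the CGDisplayStream side, logging the moved rects; note that the stream reports only rects, never the owning window, which is exactly the gap described above:

    #include <CoreGraphics/CoreGraphics.h>
    #include <stdio.h>

    CGDisplayStreamRef StartMoveLogger(void)
    {
        CGDirectDisplayID display = CGMainDisplayID();

        CGDisplayStreamRef stream = CGDisplayStreamCreate(
            display,
            CGDisplayPixelsWide(display),
            CGDisplayPixelsHigh(display),
            'BGRA', NULL,
            ^(CGDisplayStreamFrameStatus status, uint64_t displayTime,
              IOSurfaceRef frame, CGDisplayStreamUpdateRef update) {
                if (!update)
                    return;
                size_t count = 0;
                const CGRect *moved = CGDisplayStreamUpdateGetRects(
                    update, kCGDisplayStreamUpdateMovedRects, &count);
                for (size_t i = 0; i < count; i++)
                    /* A screen rect moved; which window it belongs to
                       is not reported and must be inferred. */
                    printf("moved rect: %.0f,%.0f %gx%g\n",
                           moved[i].origin.x, moved[i].origin.y,
                           moved[i].size.width, moved[i].size.height);
            });

        CFRunLoopAddSource(CFRunLoopGetCurrent(),
                           CGDisplayStreamGetRunLoopSource(stream),
                           kCFRunLoopDefaultMode);
        CGDisplayStreamStart(stream);
        return stream;
    }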

How come some controls don't have a window handle?

I want to get the window handle of some controls to do some stuff with it (requiring a handle). The controls are in a different application.
Strangely enough, I found out that many controls don't have a window handle, like the buttons in the toolbar (?) in Windows Explorer. Just try to get a handle to the Folder/Search/(etc.) buttons; it just gives me 0.
So, first question: how come some controls have no window handle? Aren't all controls windows at heart? (I'm just talking about standard controls, like those I would expect in Windows Explorer, nothing custom-drawn on a pane or the like.)
Which brings me to my second question: how do you work with them (like using EnableWindow) if you cannot get their handle?
Many thanks for any inputs!
EDIT (ADDITIONAL INFORMATION):
Windows Explorer is just an example; I have the problem frequently, and in a different application (the one I am really interested in, a proprietary one). The controls are "physical" (I can get an AutomationElement for each of them), but they have no window handle. I am also trying to send a message (SendMessage) to get the button state, to find out whether the button is pushed or not. It is a standard button that seems to expose that state only through that message, at least as far as I have seen; the pushed state can also last much longer than you would expect of a standard button, though the Windows Explorer buttons show similar behaviour, acting like button-style checkboxes even though they are (push)buttons. SendMessage requires a window handle.
Does a toolbar in some way change the behaviour of its child elements, taking away their window handle or something similar (using the parent handle plus a control ID for identification)? But then how do you use functions that require a window handle on those controls?
If they don't have a handle, they're not real controls; they're just drawn to look like controls.
But of course, the toolbar buttons in Windows Explorer do have window handles: they're part of a toolbar. Use the toolbar manipulation functions to interact with them, not EnableWindow.
Or, better yet, use the documented APIs for things like search. Reverse-engineering Windows Explorer has never ended well for anyone, least of all the poor Windows Shell team, saddled with years of backwards-compatibility hacks for certain developers who thought that APIs are for everyone else. Whatever you do manage to get to work is very likely to break on the next version of Windows.
The controls you are talking about use the ToolbarWindow32 class. If you want to interact with them, you'll need to use the toolbar control APIs/messages. For example, to enable buttons you'd use TB_ENABLEBUTTON.
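A hedged sketch of that approach; the Explorer class name and the button's command ID here are illustrative assumptions, and TB_GETSTATE works across processes because it returns plain flags rather than pointers:

    #include <windows.h>
    #include <commctrl.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical target. In practice the ToolbarWindow32 may be
           nested several levels deep, so a recursive search with
           EnumChildWindows may be needed instead of one FindWindowEx. */
        HWND top = FindWindowA("CabinetWClass", NULL);   /* an Explorer window */
        HWND toolbar = FindWindowExA(top, NULL, "ToolbarWindow32", NULL);
        if (!toolbar)
            return 1;

        int cmdId = 1001;  /* hypothetical command ID of the button */
        LRESULT state = SendMessageA(toolbar, TB_GETSTATE, cmdId, 0);
        if (state != -1)
            printf("enabled=%d pressed=%d checked=%d\n",
                   !!(state & TBSTATE_ENABLED),
                   !!(state & TBSTATE_PRESSED),
                   !!(state & TBSTATE_CHECKED));

        /* Enabling/disabling works the same way:
           SendMessageA(toolbar, TB_ENABLEBUTTON, cmdId, MAKELPARAM(TRUE, 0)); */
        return 0;
    }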
Controls can also be implemented entirely by the application itself using GDI, OpenGL, or DirectX. Try Window Detective on Mozilla Firefox and you will see that there is only one window: Firefox draws its own controls, so they are not windows known to Windows.

Does Windows 7 treat full-screen applications differently?

I have a hidden process that waits for messages from a non-standard hardware button and launches an application (with CreateProcess). Disturbing the user is not a problem; it's an action the user approved himself. Everything is fine in the usual layout, with the taskbar shown and multiple captioned and non-captioned windows. But the behaviour differs between XP and 7 when the current application is full-screen. A full-screen application here means a borderless window with exactly the dimensions of the screen. Windows hides the taskbar for such an application, even when it's set to always on top.
In XP it's OK: the taskbar is shown in this case, and the launched application (for example Calculator) too, while the full-screen app remains visible everywhere not covered by the launched app and the taskbar. But in Windows 7 nothing happens visually: the full-screen app stays on top, and if I switch to the taskbar, the launched application is there. I tried to solve it with SetForegroundWindow, BringWindowToTop, even an AllowSetForegroundWindow(GetCurrentProcessId()) call for a window handle found with CreateProcess/WaitForInputIdle/EnumThreadWindows; no change. So did something change since XP, related to full-screen windows, that is officially documented?
Thanks,
Max
I would imagine that, if you have your own hardware device, there is some API for generating "real" user input. Clearly the legacy keyboard and mouse drivers, and now USB HID drivers (many of which are usermode, I think?), have access to an API to do so.
Synergy+, for example, can generate fake keyboard and mouse events on connected PCs, and the consequence of the faked input is windows switching activation normally.
So my initial idea is for your usermode "device" application to synthesize actual keyboard messages; SendInput seems a likely candidate for the API that can fake real user input events.
Then, use an API like RegisterHotKey in your "UI" app to respond to the hotkey combination your device app generates.
Now, assuming SendInput really is generating user input events at the correct level, you should have permission (because everything was "user initiated") to change the foreground window to yourself from within the WM_HOTKEY handler in your UI app.
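A minimal sketch of the two halves; the Ctrl+Alt+F9 combination is an arbitrary choice for illustration:

    #include <windows.h>

    /* Device app: synthesize Ctrl+Alt+F9 as "real" input. */
    void SendHotkey(void)
    {
        WORD keys[3] = { VK_CONTROL, VK_MENU, VK_F9 };
        INPUT in[6] = {0};
        for (int i = 0; i < 3; i++) {           /* key downs */
            in[i].type = INPUT_KEYBOARD;
            in[i].ki.wVk = keys[i];
        }
        for (int i = 0; i < 3; i++) {           /* key ups, reverse order */
            in[3 + i].type = INPUT_KEYBOARD;
            in[3 + i].ki.wVk = keys[2 - i];
            in[3 + i].ki.dwFlags = KEYEVENTF_KEYUP;
        }
        SendInput(6, in, sizeof(INPUT));
    }

    /* UI app: register the hotkey, then take the foreground in WM_HOTKEY. */
    void RegisterUiHotkey(HWND hwnd)
    {
        RegisterHotKey(hwnd, 1, MOD_CONTROL | MOD_ALT, VK_F9);
        /* ...and in the window procedure:
           case WM_HOTKEY:
               SetForegroundWindow(hwnd);   // allowed: user-initiated input
               return 0;
        */
    }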
Vista introduced the desktop composition feature. In short, all windows draw to memory bitmaps, and the Desktop Window Manager then composes these bitmaps and draws them on a full-screen Direct3D surface. Full-screen windows do not participate in desktop composition and get to draw directly on the screen (mostly because the majority of full-screen apps are games that need real-time screen updates).
In particular, this means that when a full screen app is up and running, it is covering the DWM composed image and the user needs to switch to a DWM-managed window for the DWM to start drawing on top of the full-screen app.
I don't have a good solution for your problem, unfortunately. One way to solve it would be to add the WS_CAPTION style to your app and then handle WM_NCPAINT/WM_NCCALCSIZE/WM_NCHITTEST yourself, as sketched below. This would let you tell the DWM you are a regular windowed application while visually changing your NC area to look like you have no title bar. However, it does require a certain amount of additional code and might be more effort than you want to invest.
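A sketch of that non-client trick: keep WS_CAPTION so the DWM treats you as a normal window, but claim the whole rectangle as client area so no title bar is ever visible:

    #include <windows.h>

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        switch (msg) {
        case WM_NCCALCSIZE:
            /* Claim the entire window rect as client area, so the
               caption implied by WS_CAPTION is never drawn. */
            if (wp)
                return 0;
            break;
        case WM_NCHITTEST:
            /* Everything is client area now (fine for a window that
               fills the screen and is never dragged or resized). */
            return HTCLIENT;
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }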
Another way you can try to solve your problem is to explicitly minimize your full-screen application window when launching the new process. However, you will then have to solve the problem of when to maximize it back again.
Btw, you might find the comments on this post from Raymond Chen interesting.
Windows supports multiple desktops, and my guess would be that the full-screen app is using a different desktop than the default one (where your application will be shown). A desktop object in Windows is "a logical display surface and contains user interface objects such as windows, menus, and hooks". For example, screen savers are normally started on a separate desktop.
You can find out which desktop an application is running on using Process Explorer:
Set Process Explorer to replace Task Manager and to run always on top.
When your full-screen app is shown, launch Process Explorer by pressing Ctrl + Shift + Esc
Within Process Explorer, select the full screen process and press Ctrl + H to display the handles of this process
See the value of the Desktop item in the list. Usually this would be set to Default
If you know which desktop this app is running on, you can start your process on the same desktop by putting that desktop's name into the lpDesktop field of the STARTUPINFO you pass to CreateProcess (you can call OpenDesktop first to verify that the desktop exists and that you have access to it).
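A minimal sketch, assuming the desktop found above turned out to be the interactive default. Note that STARTUPINFO takes the desktop's name in lpDesktop, not a handle, and calc.exe is just a placeholder:

    #include <windows.h>

    int main(void)
    {
        STARTUPINFOA si = { sizeof(si) };
        PROCESS_INFORMATION pi;
        char cmd[] = "calc.exe";            /* placeholder program */

        /* Format is "window station\desktop"; substitute the name
           you found with Process Explorer. */
        si.lpDesktop = "WinSta0\\Default";

        if (CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0,
                           NULL, NULL, &si, &pi)) {
            CloseHandle(pi.hThread);
            CloseHandle(pi.hProcess);
        }
        return 0;
    }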

Mac style menus on Windows, system wide

I'm a Mac user and a Windows user (and once upon a time I used to be an Amiga user). I much prefer the menu-bar-at-the-top-of-the-screen approach that Mac (and Amiga) take (/took), and I'd like to write something for Windows that can provide this functionality (and work with existing applications).
I know this is a little ambitious, especially as it's just an itch-to-scratch type of project and, thanks to a growing family, I have virtually zero free time. I looked into this a few years ago and concluded that it was very difficult, but that was before StackOverflow ;)
I presume that I would need to do something like this to achieve the desired outcome:
Create an application that will be the custom menu bar, sitting on top of all other windows. The custom menus would have to provide all the functionality needed to replace the standard Win32 in-window menus. That's OK; it's just an application that behaves like a menu bar.
It would continuously enumerate windows to find windows that are being created/destroyed. It would enumerate the child windows collection to find the menu bar.
It would build a menu that represents the menu options in the window.
It would hide the menu bar in the window and move all direct child windows up by a corresponding pixel amount. It would shorten the window height too.
It would capture all messages that an application sends to its menu, to adjust the custom menu accordingly.
It would constantly poll for the currently active window, so it can switch menus when necessary.
When a menu hit occurs, it would post a message to the window using the hwnd of the real menu child control.
That's it! Easy, eh? No, probably not.
I would really appreciate any advice from Win32 gurus about where to start, ideas, pitfalls, thoughts on if it's even possible. I'm not a Win32 C++ programmer by day, but I've done a bit in my time and I don't mind digging my way through the MSDN platform SDK docs...
(I also have another idea, to create a taskbar for each screen in a multi-monitor setup and show the active windows for the desktop -- but I think I can do that in managed code and save myself a lot of work).
The real difference between the Mac menu across the top and the Windows approach is not just the menu: it's how the menu is used to crack open MDI apps.
In Windows, MDI applications (like Dev Studio and Office) have all their document windows hosted inside an application frame window. On the Mac there are no per-application frame windows; all document windows share the desktop with all other document windows from other applications.
Lacking the ability to do a deep rework of traditional MDI apps to get their document windows out and onto the desktop, an attempt, however noble, to get a desktop menu, seems doomed to be a novelty with no real use or utility.
I am, all things considered, rather depressed by the current state of window managers on both Mac and Windows (and Linux). Things like tabbed pages in browsers are really acts of desperation by application developers who have not been given such things as part of the standard window manager, which is where I believe tabs really belong. Why should Notepad++ have a set of tabs, and Chrome, and Firefox, and Internet Explorer (yes, I have been known to run all four), along with Dev Studio's docking views and various paint programs?
It's just a mess of different interpretations of what a modern multi-document interface should look like.
The menu bar on a typical window is part of the non-client area of the window. It's drawn when the WndProc gets a WM_NCPAINT message and passes it on to DefWindowProc, which is part of User32.dll - the core window manager code.
Other things that are drawn in the same message? The caption, the window borders, the min/max/close boxes. These are all drawn while processing a single message. So in order to hide the menu for an application, you will have to take over handling of this message, which means changing the behavior of user32.dll. Hiding the menu is going to mean that you become responsible for drawing all of the non-client area.
And the appearance of all of these elements - The caption, the borders, etc. changes with every major version of Windows. So you have to chase that as well.
That's just one of about a dozen insurmountable problems with this idea. Even Microsoft probably couldn't pull this off and they have access to the source code of user32.dll!
It would be a far less difficult job to echo the menu for each application at the top of the screen, and even that is nearly impossible. When a menu pops up there is a lot of interaction with the application, during which the menu can be (and often is) changed. It is very common for applications to change the state of menu items just before they are drawn. So you would have to replicate not only the appearance of the menus, but their entire message-flow interaction with the application.
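To give a sense of just the first, static half of that job, a sketch that reads the top-level menu of a foreign window; menu handles are desktop-scoped USER objects, so this read works across processes for classic Win32 menus (Notepad is merely an illustrative target), before any of the message-flow mirroring warned about above:

    #include <windows.h>
    #include <stdio.h>

    /* Print the top-level items of another window's menu bar. */
    void DumpMenuBar(HWND hwnd)
    {
        HMENU menu = GetMenu(hwnd);   /* NULL if no Win32 menu bar */
        if (!menu)
            return;
        int count = GetMenuItemCount(menu);
        for (int i = 0; i < count; i++) {
            char text[256] = "";
            GetMenuStringA(menu, i, text, sizeof(text), MF_BYPOSITION);
            printf("menu[%d] = %s\n", i, text);
        }
    }

    int main(void)
    {
        HWND notepad = FindWindowA("Notepad", NULL);  /* illustrative target */
        if (notepad)
            DumpMenuBar(notepad);
        return 0;
    }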
What you are trying to do is about a dozen impossible jobs all at once. If you try it, you will probably learn a lot, but you will never get it to work.
