Are browser tabs child windows? - winapi

I've recently been dabbling with writing applications in Win32 (I run Windows 10). Windows has this concept of parent windows and child windows, each of which can be customized to do various things.
What does this structure look like in a browser such as Chrome or Firefox? I've read that browsers don't really have a concept of "windows" in the Win32 sense. However, Spy++ shows that each tab of a browser window has a unique window handle (likely for interfacing with Windows), yet that "tab handle" has no children of its own.
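For reference, the hierarchy Spy++ shows can also be walked programmatically. A minimal sketch, assuming a running Chrome instance; the class name "Chrome_WidgetWin_1" is what Chrome currently uses for its top-level windows, but treat it as an assumption and verify with Spy++ on your machine:

```cpp
#include <windows.h>
#include <cstdio>

// Callback invoked once per child window; prints its handle, class name and size.
static BOOL CALLBACK PrintChild(HWND hwnd, LPARAM /*lParam*/)
{
    char cls[256] = {};
    GetClassNameA(hwnd, cls, sizeof(cls));
    RECT rc = {};
    GetWindowRect(hwnd, &rc);
    printf("  child %p  class=%-40s  %ldx%ld\n",
           (void*)hwnd, cls, rc.right - rc.left, rc.bottom - rc.top);
    return TRUE; // keep enumerating
}

int main()
{
    // Assumption: locate the browser by class name rather than by title.
    HWND browser = FindWindowA("Chrome_WidgetWin_1", nullptr);
    if (!browser) { printf("no browser window found\n"); return 1; }
    printf("top-level %p\n", (void*)browser);
    EnumChildWindows(browser, PrintChild, 0);
    return 0;
}
```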
If we consider a webpage that is broken into multiple sections, each with its own scrollbar, how does the browser know which scrollbar is active? I imagine the browser uses the mouse pointer location to figure this out once a scroll message (in the Win32 sense) reaches it.
Fundamentally, what kind of code structure do browsers use?
A related, albeit more distant, question: how is the scroll-without-focus functionality implemented in Windows? I believe the answer follows from the first question, but if it doesn't, I'll ask it separately.
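Windows 10 actually ships this behaviour ("Scroll inactive windows when I hover over them" in the mouse settings). Before that, third-party utilities such as KatMouse did it roughly as sketched below: a low-level mouse hook catches WM_MOUSEWHEEL and re-posts it to whatever window sits under the cursor. This is a hedged sketch of the idea, not how the OS itself implements it:

```cpp
#include <windows.h>

static HHOOK g_hook = nullptr;

// Low-level mouse hook: when a wheel event arrives, forward it to the window
// under the cursor instead of letting it go to the focused window.
static LRESULT CALLBACK MouseProc(int code, WPARAM wParam, LPARAM lParam)
{
    if (code == HC_ACTION && wParam == WM_MOUSEWHEEL)
    {
        const MSLLHOOKSTRUCT* info = reinterpret_cast<MSLLHOOKSTRUCT*>(lParam);
        HWND under = WindowFromPoint(info->pt);
        if (under && GetAncestor(under, GA_ROOT) != GetForegroundWindow())
        {
            // mouseData's high word holds the wheel delta; rebuild the usual
            // WM_MOUSEWHEEL wParam/lParam and post it to the hovered window.
            WPARAM wp = MAKEWPARAM(0, HIWORD(info->mouseData));
            LPARAM lp = MAKELPARAM(info->pt.x, info->pt.y);
            PostMessage(under, WM_MOUSEWHEEL, wp, lp);
            return 1; // swallow the original event
        }
    }
    return CallNextHookEx(g_hook, code, wParam, lParam);
}

int main()
{
    g_hook = SetWindowsHookExW(WH_MOUSE_LL, MouseProc, GetModuleHandleW(nullptr), 0);
    // A low-level hook needs a message loop on the installing thread.
    MSG msg;
    while (GetMessageW(&msg, nullptr, 0, 0) > 0)
        DispatchMessageW(&msg);
    UnhookWindowsHookEx(g_hook);
    return 0;
}
```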
Thanks!
PS. I only know C++.
PPS. Links to resources that can get me started are just as important and much appreciated.

Related

Options for getting/setting Windows window states for a browser window

Our agency has a Java web application that uses a third-party Java applet as a TIFF viewer. In this application, JavaScript spawns a separate window (and yes, it must be a separate window). We are currently using an in-house ActiveX control so that the HTML window can have its window state, window size and position, and screen device persisted and restored via localStorage. Our agency is IE11-only, so for now using it is not a problem. It's embedded in the page and operates on the "containing window".
This control was made many years ago in VB6. It appears that Visual Studio doesn't support the creation of ActiveX controls, and I don't personally have the knowledge or skill to create one another way.
Please do not suggest the HTML DOM window and screen objects! Most answers I've searched for just refer to these! They can't get the entire browser window size (viewport + chrome + toolbars + titlebar), nor does JavaScript have any idea about Windows-specific things like window state and screen device (it's always the OS primary screen). I understand the HTML DOM can't be tied to anything OS-specific in order for it to be implementable on multiple platforms.
Up until now this control has worked pretty well - you can, for example, maximize on a given screen and restore the window to that same screen. But I realize that ActiveX won't be supported forever and it's IE-only... It seems to me, though, that the need for such functionality can't be totally uncommon. I'm guessing some may suggest that the application UI should be changed so that this is less of a problem - but being able to put the viewer on a different monitor from the application, and having that window restored exactly to its last position, is important to us. I do understand that if you reuse that window the user only has to position it once, but we'd rather not require this of the user.
Question: are there any alternatives to COM/ActiveX for interacting with the Win32 API? Are there any other methods of accurate window persistence (ones that know about screens other than the OS primary, and about maximized/minimized/normal window states) that use something other than the Win32 API?
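Whatever host ends up replacing the ActiveX control (a small native helper that the page talks to, for example - that part is an open design choice), the Win32 end of the persistence itself is mostly GetWindowPlacement/SetWindowPlacement, which capture position, size and the minimized/maximized/normal state, and generally restore the window to the same monitor. A minimal sketch of just that piece:

```cpp
#include <windows.h>
#include <cstdio>

// Save the placement (position, size, show state) of a window to a file,
// then restore it later. WINDOWPLACEMENT records workspace coordinates, so a
// window maximized on a secondary monitor normally comes back on that monitor.
bool SavePlacement(HWND hwnd, const wchar_t* path)
{
    WINDOWPLACEMENT wp = { sizeof(wp) };
    if (!GetWindowPlacement(hwnd, &wp)) return false;
    FILE* f = _wfopen(path, L"wb");
    if (!f) return false;
    fwrite(&wp, sizeof(wp), 1, f);
    fclose(f);
    return true;
}

bool RestorePlacement(HWND hwnd, const wchar_t* path)
{
    WINDOWPLACEMENT wp = {};
    FILE* f = _wfopen(path, L"rb");
    if (!f) return false;
    bool ok = fread(&wp, sizeof(wp), 1, f) == 1;
    fclose(f);
    return ok && wp.length == sizeof(wp) && SetWindowPlacement(hwnd, &wp);
}
```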

How come some controls don't have a window handle?

I want to get the window handle of some controls to do some stuff with it (requiring a handle). The controls are in a different application.
Strangely enough, I found out that many controls don't have a window handle, like the buttons in the toolbar (?) in Windows Explorer. Just try to get a handle to the Folder/Search/(etc.) buttons - it just gives me 0.
So, first question: how come some controls have no window handle? Aren't all controls windows at heart? (I'm just talking about standard controls, like I would expect to find in Windows Explorer, nothing custom-drawn on a pane or the like.)
Which brings me to my second question: how to work with them (like using EnableWindow) if you cannot get their handle?
Many thanks for any inputs!
EDIT (ADDITIONAL INFORMATION):
Windows Explorer is just an example. I run into this problem frequently, and in a different application (the one I am really interested in, a proprietary one). I have "physical" controls (since I can get an AutomationElement for each of them), but they have no window handle. I am also trying to send a message (SendMessage) to get a button's state, to find out whether it is pushed or not. It is a standard button that seems to exhibit that behaviour only through that message, at least as far as I have seen; the pushed state can also last a lot longer on that button than you would expect of a standard button, though the Windows Explorer buttons show similar behaviour, acting like button-style checkboxes even though they are (push)buttons. SendMessage requires a window handle.
Does a toolbar in some way change the behaviour of its child elements, taking away their window handles or something similar (using the parent handle plus a control ID for identification)? But then how do you use functions that require a window handle on those controls?
If they don't have a handle, they're not real controls, they're just drawn to look like controls.
But of course, the toolbar buttons in Windows Explorer are part of a toolbar, and it's the toolbar that has the window handle. Use the toolbar manipulation functions to interact with them, not EnableWindow.
Or, better yet, use the documented APIs for things like search. Reverse-engineering Windows Explorer has never ended well for anyone, least of all the poor Windows Shell team, saddled with years of backwards-compatibility hacks for certain developers who thought that APIs are for everyone else. Whatever you do manage to get to work is very likely to break on the next version of Windows.
The controls you are talking about use the ToolbarWindow32 class. If you want to interact with them, you'll need to use the toolbar control APIs/messages. For example, to enable buttons you'd want to use TB_ENABLEBUTTON.
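A hedged sketch of what that looks like from another process, assuming you already have the ToolbarWindow32 handle (via Spy++ or EnumChildWindows) and the button's command ID. TB_GETSTATE and TB_ENABLEBUTTON pass everything through wParam/lParam, so unlike toolbar messages that take pointers they work across process boundaries:

```cpp
#include <windows.h>
#include <commctrl.h>
#include <cstdio>

// hToolbar: handle of the ToolbarWindow32 control (e.g. found with Spy++).
// cmdId:    command identifier of the button you care about (assumption:
//           discovered beforehand with Spy++ or TB_GETBUTTON).
void InspectAndEnableButton(HWND hToolbar, int cmdId)
{
    // TB_GETSTATE returns the TBSTATE_* flags for the button, or -1 on error.
    LRESULT state = SendMessageW(hToolbar, TB_GETSTATE, cmdId, 0);
    if (state == -1) { printf("no such button id\n"); return; }

    printf("enabled=%d checked=%d pressed=%d\n",
           (state & TBSTATE_ENABLED) != 0,
           (state & TBSTATE_CHECKED) != 0,
           (state & TBSTATE_PRESSED) != 0);

    // TB_ENABLEBUTTON: LOWORD(lParam) != 0 enables, == 0 disables.
    SendMessageW(hToolbar, TB_ENABLEBUTTON, cmdId, MAKELPARAM(TRUE, 0));
}
```

TB_GETSTATE also speaks to the pushed/checked question in the edit above: TBSTATE_PRESSED and TBSTATE_CHECKED are reported in the returned flags.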
Applications can also implement controls themselves using GDI, OpenGL or DirectX. Try Window Detective on Mozilla Firefox and you will see that there is only one window; even the controls in its dialog boxes are not windows that Windows knows about.

Is it possible to display "on screen" text on Windows OSes without an actual window?

This library does exactly what I am talking about on Linux systems: http://ichi2.net/pyosd/
My knowledge of the Win32 API is limited, but it seems to me that unless you create a window and enter the Win32 message loop, you cannot do it. Some Googling also seemed to confirm that.
Even so, are there newer GUI frameworks or technologies that would make this possible on Windows?
Thanks
You don't need no stinkin' GUI frameworks. You can either:
Draw directly on the desktop. Of course, this is not generally considered a good idea, since it's mucking around with the internals of another application. Drawing this way is also quite fragile because your changes are erased each time the desktop repaints itself.
Create a transparent, layered window that you draw onto, which will appear over the desktop. If you specify that this window should be a top-level window, you could also have it appear over all of the other windows on the desktop.
There is absolutely nothing that forces windows to be rectangular gray-colored boxes, and since each window provides a device context that you can draw into, you can let your imagination run wild.
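A minimal sketch of the second option, assuming plain GDI text is enough: a topmost, layered, click-through popup window whose background colour is keyed out, so only the text appears over everything else.

```cpp
#include <windows.h>

const COLORREF kTransparent = RGB(255, 0, 255); // colour-keyed away

LRESULT CALLBACK OsdProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg)
    {
    case WM_PAINT: {
        PAINTSTRUCT ps;
        HDC dc = BeginPaint(hwnd, &ps);
        // Fill with the key colour (invisible), then draw the text.
        HBRUSH bg = CreateSolidBrush(kTransparent);
        FillRect(dc, &ps.rcPaint, bg);
        DeleteObject(bg);
        SetBkMode(dc, TRANSPARENT);
        SetTextColor(dc, RGB(255, 255, 0));
        TextOutW(dc, 10, 10, L"Hello from an OSD", 17);
        EndPaint(hwnd, &ps);
        return 0;
    }
    case WM_DESTROY: PostQuitMessage(0); return 0;
    }
    return DefWindowProcW(hwnd, msg, wp, lp);
}

int WINAPI wWinMain(HINSTANCE hInst, HINSTANCE, PWSTR, int)
{
    WNDCLASSW wc = {};
    wc.lpfnWndProc = OsdProc;
    wc.hInstance = hInst;
    wc.lpszClassName = L"OsdWindow";
    RegisterClassW(&wc);

    // WS_EX_LAYERED: enables per-colour transparency.
    // WS_EX_TRANSPARENT: mouse clicks pass through to whatever is beneath.
    // WS_EX_TOPMOST: stays above normal windows. WS_POPUP: no frame at all.
    HWND hwnd = CreateWindowExW(
        WS_EX_LAYERED | WS_EX_TRANSPARENT | WS_EX_TOPMOST | WS_EX_TOOLWINDOW,
        L"OsdWindow", L"", WS_POPUP, 100, 100, 400, 100,
        nullptr, nullptr, hInst, nullptr);
    SetLayeredWindowAttributes(hwnd, kTransparent, 0, LWA_COLORKEY);
    ShowWindow(hwnd, SW_SHOW);

    MSG msg;
    while (GetMessageW(&msg, nullptr, 0, 0) > 0)
        DispatchMessageW(&msg);
    return 0;
}
```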

How can I find out or record the X11 top-level window from which a top-level window was opened?

I'm thinking of writing an X11 window manager which does for windows something like what TabKit does for tabs in Firefox (in its default tree view mode). To do this, I'd need to be able to find out which window a window was opened from. Is there a standard way of finding this out?
(I've never done any X11 programming without using a cross-platform toolkit on top of X11, let alone writing a window manager.)
For the difficult cases - applications launching other applications, e.g. a word processor launching a web browser - there's going to be a need for co-operation between applications to track this information. The Zeitgeist project already seeks to track which documents were opened from which documents, which is close enough that I should probably work with Zeitgeist (and/or its KDE equivalent - Nepomuk?) to get this implemented.
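For what it's worth, X11 already records two partial pieces of this: WM_TRANSIENT_FOR (dialogs name the top-level window they were opened for) and _NET_WM_PID (which lets a window manager at least group windows by owning process). Neither covers the general application-launches-application case above. A hedged Xlib sketch of reading both:

```cpp
#include <X11/Xlib.h>
#include <X11/Xatom.h>
#include <cstdio>

// Report what X11 itself knows about where a window "came from":
// its transient-for owner (if any) and the process that created it.
void InspectOpener(Display* dpy, Window w)
{
    Window owner = 0;
    if (XGetTransientForHint(dpy, w, &owner) && owner)
        printf("transient for window 0x%lx\n", owner);

    Atom pidAtom = XInternAtom(dpy, "_NET_WM_PID", True);
    if (pidAtom != None)
    {
        Atom type; int fmt; unsigned long n, left;
        unsigned char* data = nullptr;
        if (XGetWindowProperty(dpy, w, pidAtom, 0, 1, False, XA_CARDINAL,
                               &type, &fmt, &n, &left, &data) == Success && data)
        {
            printf("owning pid: %lu\n", *reinterpret_cast<unsigned long*>(data));
            XFree(data);
        }
    }
}
```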

Mac style menus on Windows, system wide

I'm a Mac user and a Windows user (and once upon a time I used to be an Amiga user). I much prefer the menu-bar-at-the-top-of-the-screen approach that Mac (and Amiga) take (/took), and I'd like to write something for Windows that can provide this functionality (and work with existing applications).
I know this is a little ambitious, especially as it's just an itch-to-scratch type of project and, thanks to a growing family, I have virtually zero free time. I looked into this a few years ago and concluded that it was very difficult, but that was before StackOverflow ;)
I presume that I would need to do something like this to achieve the desired outcome:
1. Create an application that will be the custom menu bar, sitting on top of all other windows. The custom menus would have to provide all the functionality of the standard Win32 in-window menus. That's OK; it's just an application that behaves like a menu bar.
2. It would continuously enumerate windows to find windows that are being created/destroyed, and enumerate each window's child windows collection to find the menu bar.
3. It would build a menu that represents the menu options in the window.
4. It would hide the menu bar in the window and move all direct child windows up by a corresponding pixel amount. It would shorten the window height too.
5. It would capture all messages that an application sends to its menu, to adjust the custom menu accordingly.
6. It would constantly poll for the currently active window, so it can switch menus when necessary.
7. When a menu hit occurs, it would post a message to the window using the hwnd of the real menu child control.
That's it! Easy, eh? No, probably not.
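For steps 2, 3 and 7, the Win32 primitives involved are roughly the ones below. This is only a hedged sketch of the easy case: many modern applications (Office, the major browsers) draw their own menus and expose nothing through GetMenu at all, which is part of why this is hard.

```cpp
#include <windows.h>
#include <cstdio>

// Dump the top-level menu of one window. GetMenu returns NULL for
// applications that do not use a classic Win32 menu bar.
void MirrorMenu(HWND hwnd)
{
    HMENU menu = GetMenu(hwnd);
    if (!menu) return;

    int count = GetMenuItemCount(menu);
    for (int i = 0; i < count; ++i)
    {
        wchar_t text[128] = {};
        GetMenuStringW(menu, i, text, 128, MF_BYPOSITION);
        printf("top-level item %d: %ls\n", i, text);
    }
}

// When the user picks an item in the replacement menu bar, hand it back to
// the owning window. Menu commands arrive as WM_COMMAND with the item ID in
// the low word of wParam; itemId would come from GetMenuItemID on the leaf.
void ForwardMenuHit(HWND hwnd, UINT itemId)
{
    PostMessageW(hwnd, WM_COMMAND, MAKEWPARAM(itemId, 0), 0);
}
```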
I would really appreciate any advice from Win32 gurus about where to start, ideas, pitfalls, thoughts on if it's even possible. I'm not a Win32 C++ programmer by day, but I've done a bit in my time and I don't mind digging my way through the MSDN platform SDK docs...
(I also have another idea, to create a taskbar for each screen in a multi-monitor setup and show the active windows for the desktop -- but I think I can do that in managed code and save myself a lot of work).
The real difference between the Mac menu across the top and the Windows approach is not just the menu itself: it's how the menu model interacts with MDI apps.
In Windows, MDI applications - like Dev Studio and Office - have all their document windows hosted inside an application frame window. On the Mac there are no per-application frame windows; all document windows share the desktop with all the other document windows from other applications.
Without the ability to do a deep rework of traditional MDI apps to get their document windows out onto the desktop, an attempt, however noble, to build a desktop menu seems doomed to be a novelty with no real use or utility.
I am, all things considered, rather depressed by the current state of window managers on both Mac and Windows (and Linux). Things like tabbed pages in browsers are really acts of desperation by application developers who have not been given such things as part of the standard window manager - which is where I believe tabs really belong. Why should Notepad++ have its own set of tabs, and Chrome, and Firefox, and Internet Explorer (yes, I have been known to run all 4), along with Dev Studio's docking view and various paint programs?
It's just a mess of different interpretations of what a modern multi-document interface should look like.
The menu bar on a typical window is part of the non-client area of the window. It's drawn when the WndProc gets a WM_NCPAINT message and passes it on to DefWindowProc, which is part of User32.dll - the core window manager code.
Other things that are drawn in the same message? The caption, the window borders, the min/max/close boxes. These are all drawn while processing a single message. So in order to hide the menu for an application, you will have to take over handling of this message, which means changing the behavior of user32.dll. Hiding the menu is going to mean that you become responsible for drawing all of the non-client area.
And the appearance of all of these elements (the caption, the borders, etc.) changes with every major version of Windows, so you have to chase that as well.
That's just one of about a dozen insurmountable problems with this idea. Even Microsoft probably couldn't pull this off and they have access to the source code of user32.dll!
It would be a far less difficult job to echo the menu for each application at the top of the screen, and even that is nearly impossible. When a menu pops up there is a lot of interaction with the application, during which the menu can be (and often is) changed. It is very common for applications to change the state of menu items just before they are drawn. So you would have to replicate not only the appearance of the menus, but their entire message-flow interaction with the application.
What you are trying to do is about a dozen impossible jobs all at once. If you try it, you will probably learn a lot, but you will never get it to work.
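One narrower piece is genuinely tractable, though: step 6 of the plan (knowing when the active window changes) doesn't need polling. A hedged sketch using a WinEvent hook, which the system calls on every foreground change:

```cpp
#include <windows.h>
#include <cstdio>

// Called by the system whenever the foreground window changes; no polling.
void CALLBACK OnForeground(HWINEVENTHOOK, DWORD, HWND hwnd,
                           LONG, LONG, DWORD, DWORD)
{
    wchar_t title[256] = {};
    GetWindowTextW(hwnd, title, 256);
    printf("active window is now %p: %ls\n", (void*)hwnd, title);
    // This is where a desktop menu bar would rebuild itself from the new
    // window's menu (if the window exposes one via GetMenu).
}

int main()
{
    HWINEVENTHOOK hook = SetWinEventHook(
        EVENT_SYSTEM_FOREGROUND, EVENT_SYSTEM_FOREGROUND,
        nullptr, OnForeground, 0, 0,
        WINEVENT_OUTOFCONTEXT | WINEVENT_SKIPOWNPROCESS);

    MSG msg; // out-of-context hooks need a message loop on this thread
    while (GetMessageW(&msg, nullptr, 0, 0) > 0)
        DispatchMessageW(&msg);
    UnhookWinEvent(hook);
    return 0;
}
```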
