How can a Delphi XE application show a popup menu inside another application's window? The idea is a helper-type app running in the background: on a registered hotkey, it needs to display a popup menu near the text caret or mouse cursor.
Applications that do this are common; here's a menu created by AutoHotkey and displayed in a text editor:
I guess what I'm asking is: how can I display a popup menu at an arbitrary screen location, without it being attached to a Delphi control?
Create a TPopupMenu with the appropriate menu items. When you need to show it, simply call Popup, passing the top-left position in screen coordinates.
PopupMenu1.Popup(X, Y);
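For reference, if you ever need the same thing without the VCL: TPopupMenu.Popup is essentially a wrapper over the Win32 TrackPopupMenu call. A minimal sketch in plain Win32 terms (the owner window and the item IDs are illustrative):

    // Sketch: show a popup menu at arbitrary screen coordinates with plain Win32.
    // hwndOwner is any window owned by your process; the item IDs are illustrative.
    #include <windows.h>

    void ShowMenuAt(HWND hwndOwner, int screenX, int screenY)
    {
        HMENU hMenu = CreatePopupMenu();
        AppendMenuW(hMenu, MF_STRING, 1, L"First action");
        AppendMenuW(hMenu, MF_STRING, 2, L"Second action");

        // Needed so the menu dismisses correctly when the user clicks elsewhere.
        SetForegroundWindow(hwndOwner);

        // TPM_RETURNCMD makes TrackPopupMenu return the chosen command ID.
        int cmd = TrackPopupMenu(hMenu, TPM_RETURNCMD | TPM_LEFTALIGN | TPM_TOPALIGN,
                                 screenX, screenY, 0, hwndOwner, NULL);
        if (cmd == 1) { /* handle first action */ }
        if (cmd == 2) { /* handle second action */ }

        DestroyMenu(hMenu);
    }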
@DavidHeffernan answered your question, but you might not have asked the right question.
Let's take the example you gave: the user is running some arbitrary application, and you want to be able to detect a hotkey, display a menu, and then take some action based on the menu item chosen (and maybe even the user's context, such as the word under the cursor). This is more complicated than simply displaying a menu at arbitrary screen coordinates.
My recommendation is to use AutoHotKey instead of trying to replicate this in some other programming language. In case you're not aware of this, it is possible for your code to run AutoHotKey scripts. IIRC, you can compile AHK scripts, so you wouldn't need to install AHK, just the compiled scripts. AHK may not be the most elegant of solutions, but it has depth and maturity.
If this is not possible, then I suggest you research Windows Hooks and DLL Injection. Unless you can find some preexisting code or framework, this will entail quite a bit of work.
The reason for this complexity? To augment another program smoothly (without running into problems with focus, etc.) you want to have your code run as part of that other program. The mechanics of this can be done via DLL injection. However, that's only the first step. Once your code is running in the right context, then your code has to inter-operate with the "host" program. This can be tricky (it helps if you have deep experience with Windows messaging and the Windows API). If you want this to work smoothly with any arbitrary program, it gets even harder.
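To give a flavour of the hook route: a hook procedure exported from a DLL and installed with SetWindowsHookEx gets that DLL loaded into the GUI processes on the desktop, which is the usual "sanctioned" form of DLL injection. A minimal sketch, with the DLL name and exported symbol purely illustrative:

    // hookdll.cpp - built as a DLL; the hook procedure must live in a DLL so
    // Windows can load it into the target processes.
    #include <windows.h>

    extern "C" __declspec(dllexport)
    LRESULT CALLBACK GetMsgProc(int code, WPARAM wParam, LPARAM lParam)
    {
        if (code >= 0) {
            // MSG* msg = (MSG*)lParam;  // inspect the host application's messages here
        }
        return CallNextHookEx(NULL, code, wParam, lParam);
    }

    // installer.cpp - run from your helper application.
    #include <windows.h>

    HHOOK InstallHook(void)
    {
        HMODULE hDll = LoadLibraryW(L"hookdll.dll");               // illustrative name
        HOOKPROC proc = (HOOKPROC)GetProcAddress(hDll, "GetMsgProc");
        // Thread id 0 = hook all GUI threads on this desktop; the DLL gets mapped
        // into a process the first time one of its threads receives a message.
        return SetWindowsHookEx(WH_GETMESSAGE, proc, hDll, 0);
    }

Once the DLL is running inside the host program, the real work (inspecting its state, showing UI in its context) still has to be written, which is where most of the effort goes.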
I am not sure how to ask the question, so here is a picture of an idea that came to mind:
So for example, when you run my "custom launcher" it displays a window with a couple of buttons on the side, each of which you can assign a program to. When you click a button, the corresponding program runs in the big panel on the right (in windowed mode).
This is all from the user's perspective, of course. They will just see that the program they want to run appears in that panel; the actual implementation may have nothing to do with "one program running inside another program".
My own use case is limited to Windows desktop platforms only, but if it is possible to generalize it, that would be nice as well.
Is this actually possible? Can I write such a program that will run another program inside a panel? The program that's launched may be someone else's, such as MS Paint or Calculator.
Just to expand on my comment above, here is an approach that may work for you: Fake it :)
When you launch the program, intercept all window messages to the program that control its position on screen. That way it "appears" to be fixed in place, but in reality it's still attached to the normal Windows desktop.
Here's some light reading for you:
Windows Event Hooks
A hook is a mechanism by which an application can intercept events, such as messages, mouse actions, and keystrokes. A function that intercepts a particular type of event is known as a hook procedure. A hook procedure can act on each event it receives, and then modify or discard the event.
I would recommend against it in a commercial application, because you are modifying the behavior of software you don't own (that software may make assumptions about what its parent window is), but for experimentation there's the SetParent Win32 function.
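For the experimentation case, a rough sketch of the reparenting idea, assuming the launched program creates a single ordinary top-level window (the names are illustrative, and plenty of programs will not tolerate being hosted this way):

    // Sketch: launch a program and reparent its main window into our panel.
    #include <windows.h>

    void HostInPanel(HWND hwndPanel, const wchar_t* exePath)
    {
        STARTUPINFOW si = { sizeof(si) };
        PROCESS_INFORMATION pi = { 0 };
        if (!CreateProcessW(exePath, NULL, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
            return;

        // Give the target time to create and show its main window.
        WaitForInputIdle(pi.hProcess, 5000);

        // Find a visible top-level window belonging to the new process.
        HWND hwndTarget = NULL;
        for (HWND h = GetTopWindow(NULL); h; h = GetWindow(h, GW_HWNDNEXT)) {
            DWORD pid = 0;
            GetWindowThreadProcessId(h, &pid);
            if (pid == pi.dwProcessId && IsWindowVisible(h)) { hwndTarget = h; break; }
        }

        if (hwndTarget) {
            // Turn it into a child of our panel and strip its frame.
            SetParent(hwndTarget, hwndPanel);
            LONG style = GetWindowLongW(hwndTarget, GWL_STYLE);
            style = (style & ~(LONG)(WS_POPUP | WS_CAPTION | WS_THICKFRAME)) | WS_CHILD;
            SetWindowLongW(hwndTarget, GWL_STYLE, style);
            RECT rc; GetClientRect(hwndPanel, &rc);
            MoveWindow(hwndTarget, 0, 0, rc.right, rc.bottom, TRUE);
        }

        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }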
Is it usual to use a dialog as the main window? So without registering any user class via RegisterClassEx? Can I do everything I do via CreateWindow()? Why should I create controls such as buttons, edit boxes, etc. via CreateWindow() instead of just making a dialog and using it as the main window?
I'd also like to know the main difference between a dialog and a window, and why you'd use the first instead of the second.
Thanks
Is it usual to use a dialog as the main window?
Yes, it is quite common.
So without registering any user class via RegisterClassEx?
A dialog uses a predefined window class, so there is usually no need to register one.
I'd also like to know the main difference between a dialog and a window, and why you'd use the first instead of the second.
Well, two big differences are that you cannot resize a dialog box and that it has no minimize or maximize buttons (by default; there are workarounds for this). Keep the name in mind: dialog box. In other words, they are used for having a dialog with the user (receiving input and displaying messages to the user). In a sense they are just like any other window; underneath, CreateWindowEx etc. gets called. However, they are somewhat predefined windows which can be put together quickly, and there are limitations to what you can do with them.
Also, a dialog uses a dialog procedure rather than a window procedure, which does some default processing for you, such as initializing some controls, etc.
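For concreteness, a minimal sketch of a dialog used as the main window, assuming the resources define a dialog template with the (illustrative) ID IDD_MAIN. Note the dialog procedure returns TRUE/FALSE rather than calling DefWindowProc, and there is no RegisterClassEx anywhere:

    // Minimal dialog-as-main-window sketch. Assumes the .rc file defines a
    // dialog template named IDD_MAIN (illustrative) with an IDOK button.
    #include <windows.h>
    #include "resource.h"

    INT_PTR CALLBACK MainDlgProc(HWND hDlg, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg) {
        case WM_INITDIALOG:
            return TRUE;                      // dialog manager sets the initial focus
        case WM_COMMAND:
            if (LOWORD(wParam) == IDOK || LOWORD(wParam) == IDCANCEL) {
                EndDialog(hDlg, LOWORD(wParam));
                return TRUE;
            }
            break;
        case WM_CLOSE:
            EndDialog(hDlg, 0);
            return TRUE;
        }
        return FALSE;                         // let the dialog manager do the rest
    }

    int WINAPI wWinMain(HINSTANCE hInst, HINSTANCE, PWSTR, int)
    {
        // DialogBox runs its own message loop and returns when EndDialog is called.
        return (int)DialogBoxW(hInst, MAKEINTRESOURCEW(IDD_MAIN), NULL, MainDlgProc);
    }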
Yes, an application can be dialog-based. There's even a wizard for that if you're using Visual Studio and MFC.
In VS2010, Create New Project > MFC Application. In "Application Type" select Dialog Based. Click through the rest of the Wizard, and you're off to the races.
Dialog-based applications are much simpler, architecturally, than other designs such as Document/View. As such, simple things are much easier to "bang out" quickly, but the limitations of the design become apparent when you try to do more complex things. You could end up replicating much of the Doc/View architecture in your dialog-based app in order to build a production-quality dialog-based application. In that case, did you really save yourself anything?
A dialog is a kind of window just as all of the various controls like buttons are really just windows. You can think of a dialog as being a kind of window with a lot of extra functionality to support the kinds of things that dialogs are used for.
There are two types of dialogs: modal, which capture the input focus and expect you to use them and dismiss them before returning to the rest of the application, and non-modal (modeless), which stay displayed but do not capture and keep the input focus. You can see both types used in applications: a modal dialog is used to display an error or to require the user to make some setting, while a non-modal one acts as a kind of tool box that stays displayed; when you need it, you click on it to do something, and at other times you are using some other window in the application.
Normally a dialog would not have a menu bar but would instead have all of its controls visible or easily accessible via tabs or some other type of presentation. Visual Studio and other IDEs have dialog designers to allow the placement of various controls along with wizards to allow the controls to be tied to classes and class members.
Which brings up a major difference between a dialog and a window. A window is a kind of empty page, and doing things with that page requires more work. A dialog has tools that make the design easy; however, you are also constrained in large part by that toolbox.
If you have an application that is focused on basically allowing a user to specify certain settings and then do some task, a dialog works fairly well. If you have something that requires more complicated user interaction, you will be better served by an application window as the base from which all of your other dialogs and controls are managed and manipulated.
I have a friend who's working at a company that offers pretty poor support for its developers (scoring a 1/12 on the Joel Test).
Their build process is locked down pretty tight, and depending on the size of the project it could take 40+(x2) mouse clicks to deploy. So I thought, "Hey, why not automate the clicks using the win32api?" (specifically using Python). I've built him a really nice tool that works just fine, except for one issue: the tool that they use has a navigation pane that may or may not be open.
You can open and close it with a button press, but I'm not sure how I could make sure it was either open or closed. It's irrelevant to the build process - the only problem is that it alters where the mouse needs to click on the screen depending on its open status. The application is written in .NET and it exposes a function call that applications are able to use to toggle the panel, so I've been looking around for ideas and so far I've got two of them:
Attach to the process via a debugger and execute the function call somehow.
Take a screenshot at the location of the panel's title bar (which I've got through the Win32 API and which doesn't appear to change regardless of whether the panel is hidden or not).
Is there an easier way to figure out the state of this panel? The developers are given an admin account on their machine in addition to their regular account, so I can entertain ideas that require admin access, though I don't think that should be necessary.
UPDATE:
It looks like there's a button that can close the pane. In UIAVerify, something shows up as "text" "Navigation" "btnClose". It says its AutomationId is btnClose, but it's a ControlType.Text.
What technology is this panel built with? Is it standard GDI or WPF? If it's GDI, it should have an HWND. You should be able to find this HWND through either a class name or window title. Once you have the HWND, you can get its width.
If it's built with WPF, er, I have no idea, but Snoop does this kind of thing, so I know it's possible.
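If the pane does turn out to be a real HWND, a sketch of the find-it-and-measure-it idea might look like the following (the window caption "Navigation" is an assumption based on the UIAVerify output above; the same calls are available from Python via pywin32 or ctypes):

    // Sketch: decide whether a navigation pane is open by finding its HWND and
    // checking whether it is visible / how wide it is. Names are illustrative.
    #include <windows.h>

    int IsNavPaneOpen(HWND hwndMainWindow)
    {
        // FindWindowEx searches direct children only; a deeply nested pane
        // would need EnumChildWindows instead.
        HWND hwndPane = FindWindowExW(hwndMainWindow, NULL, NULL, L"Navigation");
        if (!hwndPane || !IsWindowVisible(hwndPane))
            return 0;

        RECT rc;
        GetWindowRect(hwndPane, &rc);
        // Treat a pane only a few pixels wide as collapsed.
        return (rc.right - rc.left) > 10;
    }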
I want to get the window handle of some controls to do some stuff with it (requiring a handle). The controls are in a different application.
Strangely enough, I found out that many controls don't have a window handle, like the buttons in the toolbar (?) in Windows Explorer. Just try to get a handle to the Folder/Search/(etc.) buttons. It just gives me 0.
So... first question: how come some controls have no window handle? Aren't all controls windows at heart? (I'm just talking about standard controls, as I would expect to find them in Windows Explorer, nothing custom-drawn on a pane or the like.)
Which brings me to my second question: how to work with them (like using EnableWindow) if you cannot get their handle?
Many thanks for any input!
EDIT (ADDITIONAL INFORMATION):
Windows Explorer is just an example; I have the problem frequently, and in a different application (the one I am really interested in, a proprietary one). I have "physical" controls (since I can get an AutomationElement for those controls), but they have no window handle. I am also trying to send a message (SendMessage) to get the button state, to find out whether the button is pushed or not. It is a standard button that seems to exhibit that behaviour only through that message, at least as far as I have seen; also, the pushed state can last a lot longer on that button than you would expect of a standard button, though the Windows Explorer buttons show similar behaviour, acting like button-style checkboxes even though they are (push)buttons. SendMessage requires a window handle.
Does a toolbar in some way change the behaviour of its child elements, taking away their window handle or something similar (using the parent handle/control ID for identification)? But then how do you use functions that require a window handle on those controls?
If they don't have a handle, they're not real controls, they're just drawn to look like controls.
But of course, the toolbar buttons in Windows Explorer do have window handles, they're part of a toolbar. Use the toolbar manipulation functions to interact with them, not EnableWindow.
Or, better yet, use the documented APIs for things like search. Reverse-engineering Windows Explorer has never ended well for anyone, least of all the poor Windows Shell team, saddled with years of backwards-compatibility hacks for certain developers who thought that APIs are for everyone else. Whatever you do manage to get to work is very likely to break on the next version of Windows.
The controls you are talking about are using the ToolbarWindow32 class. If you want to interact with them, you'll need to use the toolbar control APIs/messages. For example, for enabling buttons you'd want to use TB_ENABLEBUTTON.
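Simple toolbar messages whose parameters are plain integers, such as TB_ENABLEBUTTON and TB_GETSTATE, can be sent across processes directly; messages that pass pointers (TB_GETBUTTON and friends) would additionally need memory allocated in the target process with VirtualAllocEx. A sketch, with the command ID purely illustrative (you would discover the real one with Spy++ or UI Automation):

    // Sketch: enable a toolbar button and read its state in another process's
    // ToolbarWindow32. The command ID is illustrative.
    #include <windows.h>
    #include <commctrl.h>

    void PokeToolbarButton(HWND hwndToolbar, int commandId)
    {
        // Enable the button (wParam = command ID, low word of lParam = TRUE/FALSE).
        SendMessageW(hwndToolbar, TB_ENABLEBUTTON, commandId, MAKELPARAM(TRUE, 0));

        // Read its state bits: TBSTATE_ENABLED, TBSTATE_CHECKED, TBSTATE_PRESSED, ...
        LRESULT state = SendMessageW(hwndToolbar, TB_GETSTATE, commandId, 0);
        if (state != -1 && (state & TBSTATE_CHECKED)) {
            // the button is currently checked/pushed
        }
    }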
You can implement controls yourself using GDI, OpenGL or DirectX. Try Window Detective on Mozilla Firefox and you will see that there is only one window; the controls in its dialog boxes are not windows known to Windows.
I'm a Mac user and a Windows user (and once upon a time I used to be an Amiga user). I much prefer the menu-bar-at-the-top-of-the-screen approach that Mac (and Amiga) take (/took), and I'd like to write something for Windows that can provide this functionality (and work with existing applications).
I know this is a little ambitious, especially as it's just an itch-to-scratch type of project and, thanks to a growing family, I have virtually zero free time. I looked into this a few years ago and concluded that it was very difficult, but that was before StackOverflow ;)
I presume that I would need to do something like this to achieve the desired outcome:
Create application that will be the custom menu bar that sits on top of all other windows. The custom menus would have to provide all functionality to replace the standard Win32 in-window menus. That's OK, it's just an application that behaves like a menu bar.
It would continuously enumerate windows to find windows that are being created/destroyed. It would enumerate the child windows collection to find the menu bar.
It would build a menu that represents the menu options in the window.
It would hide the menu bar in the window and move all direct child windows up by a corresponding pixel amount. It would shorten the window height too.
It would capture all messages that an application sends to its menu, to adjust the custom menu accordingly.
It would constantly poll for the currently active window, so it can switch menus when necessary (or, better, get notified of foreground changes; see the sketch at the end of this question).
When a menu hit occurs, it would post a message to the window using the hwnd of the real menu child control.
That's it! Easy, eh? No, probably not.
I would really appreciate any advice from Win32 gurus about where to start, ideas, pitfalls, thoughts on if it's even possible. I'm not a Win32 C++ programmer by day, but I've done a bit in my time and I don't mind digging my way through the MSDN platform SDK docs...
(I also have another idea, to create a taskbar for each screen in a multi-monitor setup and show the active windows for the desktop -- but I think I can do that in managed code and save myself a lot of work).
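Regarding step 6 above, a sketch of a push-based alternative to polling, assuming an out-of-context WinEvent hook (no DLL required) and a plain message loop:

    // Sketch: get notified when the foreground window changes, instead of polling.
    // WINEVENT_OUTOFCONTEXT delivers the callback in our own process.
    #include <windows.h>

    void CALLBACK OnForegroundChanged(HWINEVENTHOOK hook, DWORD event, HWND hwnd,
                                      LONG idObject, LONG idChild,
                                      DWORD idEventThread, DWORD dwmsEventTime)
    {
        // hwnd is the newly active top-level window: rebuild the menu bar for it here.
    }

    int main(void)
    {
        HWINEVENTHOOK hook = SetWinEventHook(EVENT_SYSTEM_FOREGROUND, EVENT_SYSTEM_FOREGROUND,
                                             NULL, OnForegroundChanged,
                                             0, 0, WINEVENT_OUTOFCONTEXT);
        MSG msg;                    // the callback needs a message loop on this thread
        while (GetMessageW(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessageW(&msg);
        }
        UnhookWinEvent(hook);
        return 0;
    }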
The real difference between the Mac menu across the top and the Windows approach is not just in the menu: it's how the menu is used to crack open MDI apps.
In Windows, MDI applications - like Dev Studio and Office - have all their document windows hosted inside an application frame window. On the Mac, there are no per-application frame windows; all document windows share the desktop with all other document windows from other applications.
Lacking the ability to do a deep rework of traditional MDI apps to get their document windows out and onto the desktop, an attempt, however noble, to get a desktop menu, seems doomed to be a novelty with no real use or utility.
I am, all things considered, rather depressed by the current state of window managers on both Mac and Windows (and Linux). Things like tabbed pages in browsers are really acts of desperation by application developers who have not been given such things as part of the standard window manager - which is where I believe tabs really belong. Why should Notepad++ have its own set of tabs, and Chrome, and Firefox, and Internet Explorer (yes, I have been known to run all 4), along with Dev Studio's docking view and various paint programs?
It's just a mess of different interpretations of what a modern multi-document interface should look like.
The menu bar on a typical window is part of the non-client area of the window. It's drawn when the WndProc gets a WM_NCPAINT message and passes it on to DefWindowProc, which is part of User32.dll - the core window manager code.
Other things that are drawn in the same message? The caption, the window borders, the min/max/close boxes. These are all drawn while processing a single message. So in order to hide the menu for an application, you will have to take over handling of this message, which means changing the behavior of user32.dll. Hiding the menu is going to mean that you become responsible for drawing all of the non-client area.
And the appearance of all of these elements - the caption, the borders, etc. - changes with every major version of Windows, so you have to chase that as well.
That's just one of about a dozen insurmountable problems with this idea. Even Microsoft probably couldn't pull this off and they have access to the source code of user32.dll!
It would be a far less difficult job to echo the menu for each application at the top of the screen, and even that is a nearly impossible job. When the menu pops up, there is a lot of interaction with the application, during which the menu can be (and often is) changed. It is very common for applications to change the state of menu items just before they are drawn. So you will have to replicate not only the appearance of the menus, but their entire message-flow interaction with the application.
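For what it's worth, the "echo the menu" part is at least startable, because menus are USER objects and the usual menu calls accept a menu handle belonging to another process's window - assuming the target has a classic Win32 menu bar at all, which many modern applications do not. A rough sketch:

    // Sketch: read the top-level menu items of another application's window.
    // Only works when the target has a classic Win32 menu bar (GetMenu != NULL).
    #include <windows.h>
    #include <stdio.h>

    void DumpMenuBar(HWND hwndTarget)
    {
        HMENU hMenu = GetMenu(hwndTarget);
        if (!hMenu) {
            printf("No standard Win32 menu bar on this window.\n");
            return;
        }
        int count = GetMenuItemCount(hMenu);
        for (int i = 0; i < count; i++) {
            wchar_t text[256] = L"";
            GetMenuStringW(hMenu, i, text, 256, MF_BYPOSITION);
            // A chosen item would later be activated by posting WM_COMMAND with
            // that item's ID (GetMenuItemID) back to hwndTarget.
            wprintf(L"Menu %d: %ls\n", i, text);
        }
    }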
What you are trying to do is about a dozen impossible jobs all at once. If you try it, you will probably learn a lot, but you will never get it to work.