Using PostMessage in Windows Mobile to simulate a menu pick

I'm writing a routine to provide user-definable keyboard shortcuts for any menu item in my Windows Mobile 5 application, which is written in C++/MFC. To do this I get all of the available menu command IDs and use CWnd::PostMessage(WM_COMMAND, MyMenuID) to post each one to the application. I use this technique to good effect elsewhere for inter-thread comms, but not with menu command IDs. Any ideas why this doesn't work? The app is document/view, and I have tried posting to both the CMainFrame and the CView derived windows. I could write a god-awful switch statement, but I feel posting a message should work.
Edit: OK, I've tried a number of things, including suggestions from this post, to no avail. Big ugly switch statement it is for now; I'll update again if I find anything better.

The only reason I can think of is that the message is going to the wrong window. Don't forget that not all menu commands are processed by the same window. Some menu commands, like Cut, are usually processed by a view window. Others are processed by frame windows, and some possibly by the application object.
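A minimal sketch of the posting side, assuming an MFC document/view app (the function name is mine; nID is whatever menu command ID your shortcut routine looked up):

#include <afxwin.h>  // MFC; assumes this runs inside an MFC application

void PostMenuCommand(UINT nID)
{
    CWnd* pMainWnd = AfxGetMainWnd();
    if (pMainWnd != NULL)
    {
        // A real menu pick arrives as WM_COMMAND with the command ID in
        // the low word of wParam, 0 in the high word, and lParam == 0.
        // The frame window then routes it to the frame, the active view,
        // the document, and the application object, in that order.
        pMainWnd->PostMessage(WM_COMMAND, MAKEWPARAM(nID, 0), 0);
    }
}

One thing worth checking: IIRC, CFrameWnd::OnCommand re-runs the command's ON_UPDATE_COMMAND_UI handler and silently discards the message if the handler reports the command disabled, so a posted command can vanish without a trace if an update handler doesn't consider it enabled at that moment.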
Hope this helps.

Related

Is there a WinAPI call or keyboard shortcut to put the Windows console into "mark" mode?

Normally the user does this by right-clicking the console title bar, then selecting "Edit" and finally "Mark" (see http://www.megaleecher.net/Copy_Paste_Text_Dos_Window).
So is there a way of doing it from a console application, either by sending a message, making an API call, or sending a keyboard sequence to its own window?
If this is your own application and you want the richer behaviour and flexibility of a windowed app rather than a console app, then use a windowed app. Otherwise, you can try to automate the steps by simulating the input via SendInput. I would advise against doing this because it requires two steps (once for the right-click, once to select "Mark"), which means that if someone clicks something else between those two events, your sequence will be broken. Furthermore, you are relying on automating an implementation detail, which is prone to change at any point.
Looking through the Console Functions, it doesn't appear as though anything exists for setting the selection. The closest is going the other way with GetConsoleSelectionInfo.
If you want to process the information that is within a console application, a better alternative is to pipe it to your own process and deal with it there.
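A minimal sketch of the piping approach, assuming the console program to capture is called child.exe (a hypothetical name): the parent creates an inheritable pipe, hands the write end to the child as its stdout, and reads the output itself.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    // Create an anonymous pipe with inheritable handles.
    SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };
    HANDLE hRead = NULL, hWrite = NULL;
    if (!CreatePipe(&hRead, &hWrite, &sa, 0))
        return 1;
    // Keep the read end private to the parent.
    SetHandleInformation(hRead, HANDLE_FLAG_INHERIT, 0);

    // Hand the write end to the child as its stdout/stderr.
    STARTUPINFOA si = { sizeof(si) };
    si.dwFlags = STARTF_USESTDHANDLES;
    si.hStdOutput = hWrite;
    si.hStdError  = hWrite;
    si.hStdInput  = GetStdHandle(STD_INPUT_HANDLE);

    PROCESS_INFORMATION pi;
    char cmd[] = "child.exe";  // hypothetical console program to capture
    if (!CreateProcessA(NULL, cmd, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi))
        return 1;
    CloseHandle(hWrite);  // so ReadFile reports EOF when the child exits

    // Read and process the child's output here.
    char buf[4096];
    DWORD got;
    while (ReadFile(hRead, buf, sizeof(buf), &got, NULL) && got > 0)
        fwrite(buf, 1, got, stdout);

    CloseHandle(hRead);
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    return 0;
}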
Found: PostMessage(GetConsoleWindow(), WM_COMMAND, 65522, 0);
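For reference, a self-contained version of that call (a sketch; note that 65522 is an observed, undocumented command ID, so it may vary between Windows versions):

#include <windows.h>

int main(void)
{
    // Ask our own console window to enter "Mark" mode, as if the user had
    // picked Edit -> Mark from the title-bar menu.
    PostMessage(GetConsoleWindow(), WM_COMMAND, 65522, 0);
    Sleep(30000);  // keep the process alive long enough to see the effect
    return 0;
}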

Is it possible to call a function in a different, but currently executing process?

I have a friend who's working at a company that offers pretty poor support for its developers (scoring a 1/12 on the Joel Test).
Their build process is locked down pretty tight, and depending on the size of the project it could take 40+ (x2) mouse clicks to deploy. So I thought, "Hey, why not automate the clicks using the Win32 API?" (specifically using Python). I've built him a really nice tool that works just fine, except for one issue: the tool that they use has a navigation pane that may or may not be open.
You can open and close it with a button press, but I'm not sure how I could make sure it is either open or closed. It's irrelevant to the build process; the only problem is that it alters where the mouse needs to click on the screen depending on whether it is open. The application is written in .NET and exposes a function call that other applications can use to toggle the panel, so I've been looking around for ideas. So far I've got two of them:
Attach to the process via a debugger and execute the function call somehow.
Take a screenshot at the location of the panel's title bar (whose position I've got through the Win32 API and which doesn't appear to change whether the panel is hidden or not).
Is there an easier way to figure out the state of this panel? The developers are given an admin account on their machine in addition to their regular account, so I can entertain ideas that require admin access, though I don't think that should be necessary.
UPDATE:
It looks like there's a button that can close the pane. In UIAVerify, something shows up as "text" "Navigation" "btnClose". It says its AutomationId is btnClose, but it's a ControlType.Text.
What technology is this panel built with? Is it standard GDI or WPF? If it's GDI, it should have an HWND. You should be able to find this HWND through either its class name or its window title. Once you have the HWND, you can get its width.
If it's built with WPF, er, I have no idea, but Snoop does this kind of thing, so I know it's possible.
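A minimal sketch of the HWND approach, assuming hypothetical names ("Build Tool" for the main window title, "NavPaneClass" for the panel's window class; substitute whatever Spy++ or Window Detective reports): find the panel's HWND and infer open/closed from its current width.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HWND hMain = FindWindowA(NULL, "Build Tool");     // hypothetical title
    if (hMain == NULL) return 1;
    HWND hPane = FindWindowExA(hMain, NULL, "NavPaneClass", NULL);
    if (hPane == NULL) return 1;

    RECT rc;
    GetWindowRect(hPane, &rc);
    int width = rc.right - rc.left;

    // Heuristic: a collapsed pane is hidden or has (near-)zero width.
    BOOL open = IsWindowVisible(hPane) && width > 10;
    printf("navigation pane is %s (width=%d)\n", open ? "open" : "closed", width);
    return 0;
}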

Show a popup menu in another application's window

How can a Delphi XE application show a popup menu inside another application's window? The idea is for a helper-type app, running in the background. On a registered hotkey the application needs to display a popup menu near the text caret or mouse cursor.
Applications that do this are common. For example, AutoHotkey can create a menu and display it in a text editor (screenshot omitted).
I guess what I'm asking is: how can I display a popup menu at an arbitrary screen location, without it being attached to a Delphi control?
Create a TPopupMenu with the appropriate menu items. When you need to show it, simply call Popup, passing the top-left position in screen coordinates.
PopupMenu1.Popup(X, Y);
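Under the hood this is the Win32 TrackPopupMenu call, which takes absolute screen coordinates and is not tied to any control. A rough C++ sketch of the same idea (the window handle and command ID are placeholders):

#include <windows.h>

#define ID_HELLO 1001  // hypothetical command ID

// Show a popup menu at the cursor. hwnd is any (possibly hidden) window
// owned by the helper app; a popup menu always needs an owner window.
void ShowMenuAtCursor(HWND hwnd)
{
    POINT pt;
    GetCursorPos(&pt);

    HMENU menu = CreatePopupMenu();
    AppendMenuA(menu, MF_STRING, ID_HELLO, "Say hello");

    // Known quirk for background apps: without SetForegroundWindow the
    // menu won't dismiss when the user clicks elsewhere.
    SetForegroundWindow(hwnd);
    int cmd = TrackPopupMenu(menu, TPM_RETURNCMD | TPM_LEFTALIGN | TPM_TOPALIGN,
                             pt.x, pt.y, 0, hwnd, NULL);
    PostMessage(hwnd, WM_NULL, 0, 0);  // second half of the same workaround

    if (cmd == ID_HELLO)
        MessageBoxA(hwnd, "Hello", "Menu", MB_OK);
    DestroyMenu(menu);
}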
@DavidHeffernan answered your question, but you might not have asked the right question.
Let's take the example you gave: the user is running some arbitrary application, and you want to be able to detect a hotkey, display a menu, and then take some action based on the menu item chosen (and maybe even on the user's context, such as the word under the cursor). This is more complicated than simply displaying a menu at arbitrary screen coordinates.
My recommendation is to use AutoHotkey instead of trying to replicate this in some other programming language. In case you're not aware of this, it is possible for your code to run AutoHotkey scripts. IIRC, you can compile AHK scripts, so you wouldn't need to install AHK, just the compiled scripts. AHK may not be the most elegant of solutions, but it has depth and maturity.
If this is not possible, then I suggest you research Windows Hooks and DLL Injection. Unless you can find some preexisting code or framework, this will entail quite a bit of work.
The reason for this complexity? To augment another program smoothly (without running into problems with focus, etc.), you want your code to run as part of that other program. The mechanics of this can be handled via DLL injection, but that's only the first step. Once your code is running in the right context, it has to inter-operate with the "host" program. This can be tricky (it helps if you have deep experience with Windows messaging and the Windows API). If you want this to work smoothly with any arbitrary program, it gets even harder.
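For reference, a bare-bones sketch of the classic LoadLibrary-based injection step (the DLL path and target PID are placeholders; real code needs matching 32/64-bitness and proper error handling):

#include <windows.h>
#include <string.h>

// Load dllPath (e.g. a hypothetical "hook.dll") inside the process with
// the given PID.
BOOL InjectDll(DWORD pid, const char* dllPath)
{
    HANDLE hProc = OpenProcess(PROCESS_CREATE_THREAD | PROCESS_VM_OPERATION |
                               PROCESS_VM_WRITE | PROCESS_QUERY_INFORMATION,
                               FALSE, pid);
    if (hProc == NULL) return FALSE;

    // Copy the DLL's path into the target's address space...
    SIZE_T len = strlen(dllPath) + 1;
    LPVOID remote = VirtualAllocEx(hProc, NULL, len, MEM_COMMIT, PAGE_READWRITE);
    WriteProcessMemory(hProc, remote, dllPath, len, NULL);

    // ...then run LoadLibraryA on it there. kernel32 is mapped at the same
    // base address in every process of the same bitness, so the local
    // address of LoadLibraryA is also valid in the target.
    LPTHREAD_START_ROUTINE pLoadLibrary = (LPTHREAD_START_ROUTINE)
        GetProcAddress(GetModuleHandleA("kernel32"), "LoadLibraryA");
    HANDLE hThread = CreateRemoteThread(hProc, NULL, 0, pLoadLibrary,
                                        remote, 0, NULL);
    BOOL ok = (hThread != NULL);
    if (ok)
    {
        WaitForSingleObject(hThread, INFINITE);
        CloseHandle(hThread);
    }
    CloseHandle(hProc);
    return ok;
}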

How come some controls don't have a window handle?

I want to get the window handle of some controls to do some stuff with it (requiring a handle). The controls are in a different application.
Strangely enough, I found out that many controls don't have a window handle, like the buttons in the toolbar (?) in Windows Explorer. Just try to get a handle to the Folder/Search/(etc.) buttons; it just gives me 0.
So, first question: how come some controls have no window handle? Aren't all controls windows, in their hearts? (I'm just talking about standard controls, like those I would expect in Windows Explorer, nothing custom-drawn on a pane or the like.)
Which brings me to my second question: how do you work with them (like using EnableWindow) if you cannot get their handle?
Many thanks for any inputs!
EDIT (ADDITIONAL INFORMATION):
Windows Explorer is just an example. I run into this problem frequently, and in a different application (the one I am really interested in, a proprietary one). I have "physical" controls (since I can get an AutomationElement for those controls), but they have no window handle. Also, I am trying to send a message (SendMessage) to get the button state, to find out whether the button is pushed or not. (It is a standard button that seems to expose that behaviour only through that message, at least as far as I have seen. Also, the pushed state can last a lot longer on that button than you would expect of a standard button, though the Windows Explorer buttons show similar behaviour, acting like button-style checkboxes even though they are (push)buttons.) SendMessage requires a window handle.
Does a toolbar in some way change the behaviour of its child elements, taking away their window handles or something similar (using the parent handle plus a control ID for identification)? But then how do you use functions that require a window handle on those controls?
If they don't have a handle, they're not real controls; they're just drawn to look like controls.
But of course, the toolbar buttons in Windows Explorer are backed by a window handle: they're part of a toolbar control, which is a single window. Use the toolbar manipulation functions to interact with them, not EnableWindow.
Or, better yet, use the documented APIs for things like search. Reverse-engineering Windows Explorer has never ended well for anyone, least of all the poor Windows Shell team, saddled with years of backwards-compatibility hacks for certain developers who thought that APIs are for everyone else. Whatever you do manage to get to work is very likely to break on the next version of Windows.
The controls you are talking about use the ToolbarWindow32 class. If you want to interact with them, you'll need to use the toolbar control APIs/messages. For example, to enable buttons you'd want to use TB_ENABLEBUTTON.
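A minimal sketch, which also covers the pushed-state question from the edit above (hToolbar is the ToolbarWindow32 HWND, found with Spy++ or FindWindowEx; idButton is the button's command ID; both are assumptions about your target app):

#include <windows.h>
#include <commctrl.h>

// Both messages pass only integers, so unlike struct-based toolbar
// messages they are safe to send across process boundaries.
void PokeToolbarButton(HWND hToolbar, int idButton)
{
    // Low word of lParam: TRUE enables the button, FALSE greys it out.
    SendMessage(hToolbar, TB_ENABLEBUTTON, idButton, MAKELPARAM(TRUE, 0));

    // TB_GETSTATE returns a TBSTATE_* bitmask, or -1 for an unknown ID.
    LRESULT state = SendMessage(hToolbar, TB_GETSTATE, idButton, 0);
    if (state != -1)
    {
        BOOL pressed = (state & TBSTATE_PRESSED) != 0;  // held down right now
        BOOL checked = (state & TBSTATE_CHECKED) != 0;  // "stuck down" toggle
        (void)pressed; (void)checked;  // use these to read the button state
    }
}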
You can implement controls yourself using GDI, OpenGL, or DirectX. Try Window Detective on Mozilla Firefox and you will see that there is only one window; the controls in its dialog boxes are not windows known to Windows.

How can I post a Cocoa "sheet" on another program's window?

Using the Apple OS X Cocoa framework, how can I post a sheet (slide-down modal dialog) on the window of another process?
Edit: Clarified a bit:
My application is a Finder extension to do Subversion version control (http://scplugin.tigris.org/). Part of my application is a plug-in (a Contextual Menu Item for Finder); the bulk of my application, however, is in a separate daemon process. For several reasons, we've chosen to put virtually all the code into the daemon; the plug-in only defines the menu itself and sends Apple events over to the daemon.
Sometimes, the daemon needs to prompt the user for further information. It can toss a window on-screen for this, but that's disruptive (randomly positioned), and it seems to me the workflow here is legitimately modal, for example: "select a file, pick 'commit' from the menu, provide commit comments, do the operation."
Interprocess cooperation (such as passing a reference of some kind) is acceptable: both processes are mine, but I want to avoid binding the sheet's code into the primary process.
Really, it sounds like you're trying to have your inter-process communication happen at the view level, which isn't really how Cocoa generally works. Things will be much easier if you separate your layers a bit more than that.
Why don't you want to put the sheet code into the other process? It's view code, and view code is inherently process-specific. The right thing to do here is probably to add somewhat generic modal-sheet support to your plugin code, and an IPC call that your daemon can make to summon that code. Trying to ship view objects over to the remote process is going to be nightmarish if you can make it work at all.
You're fighting the frameworks with this approach.
You can't add a sheet to a window in another process, because you have at most only the most restricted access to the windows in the other process.
Please don't do this. Make the interaction nonmodal if at all possible. Especially in something like a commit, it's much nicer to be able to browse around your files while you're writing commit comments.
OS X does have window groups, but I don't think they can (easily) span applications.
Another thing to consider is that in OS X it's possible to have many Finder windows open on the same folder (unlike in OS 9). Even if you did have sufficient privileges/APIs to add a sheet to a Finder window, it's not like the modality of that window would prevent the user from being able to continue working with the files.
(My personal opinion as a long-time Mac user is that this kind of interaction would drive me right up the wall.)
