I have a Windows application with several sub-forms. To reach the form I need, I have to navigate through 5 or 6 forms. This is time consuming, since I have to open it several times through the day, and I do it daily.
My need: I don't have the source project for this application; I got it as an executable program. But I need to create some application that does these steps for me automatically. In other words, I need to find a way to automatically click the buttons that navigate through the forms and open the form I need, starting from step one.
Is there any way I can do this?
There is indeed, though note that generic solutions already exist to perform just this kind of function on arbitrary programs.
You can use Spy++ or a resource editor, like ResHack or ResEdit, to look at the program and get the control IDs of the navigation buttons.
Once done, you can get a handle to the program itself and then send its window procedure the messages that would be generated if the user clicked the controls with a mouse.
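A minimal sketch of that approach in C++ (the window title "Target App" and the control ID 0x3F5 are placeholders for whatever Spy++ actually reports):

    #include <windows.h>

    int main()
    {
        // Both the title text and the control ID below are placeholders;
        // substitute the values Spy++ shows for the real application.
        HWND hMain = FindWindowA(NULL, "Target App");
        if (!hMain) return 1;

        HWND hButton = GetDlgItem(hMain, 0x3F5);   // control ID from Spy++
        if (!hButton) return 1;

        // BM_CLICK makes the button run through a full click cycle
        // (WM_LBUTTONDOWN + WM_LBUTTONUP), as if the user pressed it.
        SendMessage(hButton, BM_CLICK, 0, 0);
        return 0;
    }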
Another alternative is to get the position of the running target application, after you've got its HWND, by using the GetWindowRect function. You could then use this position, along with vertical/horizontal offsets, to generate mouse events.
The two have more-or-less the same result, though some applications won't work with approach #1.
In one instance, you need to use Spy++ to get the control IDs.
In the other instance, you need to use an image editor to get the pixel offsets of the controls.
In both instances, you'll need to use FindWindow, along with the window's title text, in order to get an HWND.
You could use a combination of the two: asking the program itself, via GetDlgItem, for the handles of the controls you need to click. You could then query each control for its position, before using mouse_event to position the mouse above it and again to click it.
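A hedged sketch of that combined approach (title and control ID are placeholders again):

    #include <windows.h>

    void ClickControl(const char* title, int controlId)
    {
        HWND hMain = FindWindowA(NULL, title);     // find by title text
        HWND hCtrl = hMain ? GetDlgItem(hMain, controlId) : NULL;
        if (!hCtrl) return;

        RECT rc;
        GetWindowRect(hCtrl, &rc);                 // screen coordinates
        int x = (rc.left + rc.right) / 2;          // centre of the control
        int y = (rc.top + rc.bottom) / 2;

        SetCursorPos(x, y);                            // hover over it...
        mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0); // ...press...
        mouse_event(MOUSEEVENTF_LEFTUP,   0, 0, 0, 0); // ...and release
    }

Chaining several such calls, with a short Sleep() between them so each form has time to appear, would automate the whole navigation sequence from the question.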
Quite a few ways to skin this cat, actually.
Pre-existing solutions like AutoIt are said to be very easy to use and will be much easier than coding a new program for each target.
I have found some resources, like this, dealing with the subject of attaining multiple cursors on Windows when more than one mouse is attached to the system. My requirement is a little simpler, but I need some input on it.
1) What I want is to invoke an application (let's say IE) and perform mouse activity (hovering, clicking, etc.) on it. All the while, there should be no disturbance to the system cursor, which should remain free for the user of the desktop.
2) I understand that this cannot be done using the Windows cursor APIs, as the documentation always refers to "the cursor" and there is no built-in concept of multiple cursors.
3) This leaves me with drawing a cursor on the target window myself. A bitmap, perhaps, that I move around? Which APIs would be of use here?
How do I simulate the visual effects that actual cursor movement produces? Do I need to send messages such as WM_MOUSEMOVE and WM_SETCURSOR to the target window?
Does sending mouse messages to the other window interfere with the mouse activity the user is currently engaged in? The intention is not to disturb the user while the application runs.
Thanks for your inputs.
Create a separate window for each extra mouse that is installed, and put an image of the desired cursor in each window. Then use the Raw Input API to monitor activity on the extra mice and move your windows around as needed, and use mouse_event() or SendInput() to send out your own mouse events (movements, clicks, etc.) as needed.
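Roughly, the Raw Input side of that could look like the following sketch (window creation and the fake-cursor windows themselves are omitted, and error handling is minimal):

    #include <windows.h>

    // Register a window to receive WM_INPUT from all mice, even when it
    // does not have the focus (RIDEV_INPUTSINK).
    void RegisterForRawMouse(HWND hwnd)
    {
        RAWINPUTDEVICE rid = {};
        rid.usUsagePage = 0x01;            // HID generic desktop page
        rid.usUsage     = 0x02;            // HID usage: mouse
        rid.dwFlags     = RIDEV_INPUTSINK;
        rid.hwndTarget  = hwnd;
        RegisterRawInputDevices(&rid, 1, sizeof(rid));
    }

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        if (msg == WM_INPUT)
        {
            RAWINPUT ri;
            UINT size = sizeof(ri);
            if (GetRawInputData((HRAWINPUT)lp, RID_INPUT, &ri, &size,
                                sizeof(RAWINPUTHEADER)) != (UINT)-1 &&
                ri.header.dwType == RIM_TYPEMOUSE)
            {
                // ri.header.hDevice tells you WHICH mouse this came from;
                // ri.data.mouse.lLastX / lLastY are its relative deltas.
                // Move that mouse's cursor window accordingly here.
            }
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }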
It is well known that the WinAPI offers a number of default cursors. Among these, there is for example the default arrow (IDC_ARROW) and a "stop" icon (IDC_NO).
When using drag'n'drop in the Windows Explorer (e.g. in Windows 7), you can notice that Windows shows a combination of the two aforementioned cursors when dragging a file to some place where it cannot be dropped. This cursor does not exist as a .cur file and it cannot be configured in Windows' mouse setup.
I didn't stumble upon any obvious function to "combine" cursors, so I'm wondering whether there's a built-in way in the WinAPI to create a cursor like the one in the example, or whether I have to toy around with the two cursors manually and blit them together into a new cursor on my own.
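For what it's worth, if the manual blitting route is needed, a rough sketch of one way to do it: composite both cursors into a color bitmap, AND their masks together, and build a new cursor with CreateIconIndirect. The mask handling here is simplistic and worth verifying; also, Explorer draws the "no" symbol offset from the arrow, which DrawIconEx's x/y and size arguments would let you reproduce.

    #include <windows.h>

    HCURSOR CombineCursors(HCURSOR hBase, HCURSOR hOverlay)
    {
        ICONINFO info = {};
        GetIconInfo(hBase, &info);   // hotspot + mask/color of the base

        HDC screen = GetDC(NULL);
        int cx = GetSystemMetrics(SM_CXCURSOR);
        int cy = GetSystemMetrics(SM_CYCURSOR);

        HBITMAP color = CreateCompatibleBitmap(screen, cx, cy);
        HBITMAP mask  = CreateBitmap(cx, cy, 1, 1, NULL); // monochrome
        HDC dc = CreateCompatibleDC(screen);

        // Color plane: composite base, then overlay, onto black.
        HGDIOBJ old = SelectObject(dc, color);
        PatBlt(dc, 0, 0, cx, cy, BLACKNESS);
        DrawIconEx(dc, 0, 0, hBase,    cx, cy, 0, NULL, DI_NORMAL);
        DrawIconEx(dc, 0, 0, hOverlay, cx, cy, 0, NULL, DI_NORMAL);

        // Mask plane: AND both masks so both shapes stay opaque.
        SelectObject(dc, mask);
        PatBlt(dc, 0, 0, cx, cy, WHITENESS);
        DrawIconEx(dc, 0, 0, hBase,    cx, cy, 0, NULL, DI_MASK);
        DrawIconEx(dc, 0, 0, hOverlay, cx, cy, 0, NULL, DI_MASK);
        SelectObject(dc, old);

        ICONINFO out = {};
        out.fIcon    = FALSE;          // FALSE means cursor, not icon
        out.xHotspot = info.xHotspot;  // keep the base cursor's hotspot
        out.yHotspot = info.yHotspot;
        out.hbmMask  = mask;
        out.hbmColor = color;
        HCURSOR combined = (HCURSOR)CreateIconIndirect(&out);

        // CreateIconIndirect copies the bitmaps, so clean everything up.
        DeleteDC(dc);
        ReleaseDC(NULL, screen);
        DeleteObject(mask);
        DeleteObject(color);
        DeleteObject(info.hbmMask);
        DeleteObject(info.hbmColor);
        return combined;
    }

CombineCursors(LoadCursor(NULL, IDC_ARROW), LoadCursor(NULL, IDC_NO)) would then approximate the drag'n'drop cursor from the example.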
You can probably figure out why I am asking this question. Even if not, it's very simple.
My question is whether it is possible to detect the use of SetCursorPos() on one's own application, without scanning other running applications for any calls to this API.
For example, if I have my cursor in a window and I call SetCursorPos(), can this window in anyway know that the cursor placement is not directly from the mouse (raw input)?
I am not oblivious to the fact that you can 'know' whether mouse input is raw simply by checking how the position changes; for example, if the position jumps from (100, 100) to (500, 500) without passing through the individual locations in between, then with certainty, something has altered the mouse position.
If any of you know of a way to produce 'raw mouse input' without any application being able to tell the difference between the output of a function and that of a real mouse (if there is such a difference), then that would suffice, too.
Of course, whenever I move my mouse, the operating system detects this and moves the cursor accordingly. In principle, I should be able to bend this low-level functionality to my will?
There is no way for a window to directly determine how the mouse was moved. External applications could be using SetCursorPos(), but they could also be using lower-level functions like mouse_event() or SendInput() instead. By the time the notifications reach the target window, the OS has already normalized the data and any source information is lost.

If you really needed to detect the use of SetCursorPos() or other functions, you would have to hook directly into those functions in every running process. Alternatively, you might try registering for "Raw Input" via RegisterRawInputDevices() and checking whether you get a corresponding notification from the mouse hardware directly, assuming the simulating functions do not trigger Raw Input notifications as well.
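On the producing side, SendInput() is the lowest-level injection point ordinary applications get; a minimal sketch (whether Raw Input consumers can still tell it apart from hardware is exactly what you would need to test):

    #include <windows.h>

    // Move the cursor to screen position (x, y) via the input stream
    // rather than SetCursorPos(). Absolute coordinates are normalized
    // to a 0..65535 grid over the primary monitor.
    void MoveCursorViaInput(int x, int y)
    {
        INPUT in = {};
        in.type = INPUT_MOUSE;
        in.mi.dx = (x * 65535) / GetSystemMetrics(SM_CXSCREEN);
        in.mi.dy = (y * 65535) / GetSystemMetrics(SM_CYSCREEN);
        in.mi.dwFlags = MOUSEEVENTF_MOVE | MOUSEEVENTF_ABSOLUTE;
        SendInput(1, &in, sizeof(INPUT));
    }

Calling this in small increments along a path, instead of jumping straight to the target, would also defeat the "teleporting cursor" heuristic described in the question.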
I have been all over the web looking for an answer to this, and my question is this:
How does a GUI framework work? For instance, how does Qt work? Are there any books or websites on the topic of writing a GUI framework from scratch? And does the framework have to call methods from the operating system's own GUI framework?
-- Thank you to anyone who takes the time to try to answer this question, and forgive me if I misspelled anything.
In the old days we did a lot of GUI programming from scratch. It is not as hard as it seems, but it takes a few weeks to get results.
First you need a good drawing library. Minimal functionality for this library is drawing clipped rectangles (using patterns), lines, bitmaps, and fonts. You can cheat by creating fonts as bitmaps, and a clipped rectangle is just a bunch of horizontal lines.
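As a toy illustration of that cheat (every name here is made up; the framebuffer is just an array):

    // A filled, clipped rectangle really is just clipped horizontal lines.
    const int FB_W = 320, FB_H = 200;
    static unsigned char framebuffer[FB_W * FB_H];

    struct Rect { int left, top, right, bottom; };

    void draw_hline(int x0, int x1, int y, unsigned char color, const Rect& clip)
    {
        if (y < clip.top || y >= clip.bottom) return;  // fully clipped out
        if (x0 < clip.left)  x0 = clip.left;
        if (x1 > clip.right) x1 = clip.right;
        for (int x = x0; x < x1; ++x)
            framebuffer[y * FB_W + x] = color;
    }

    void fill_rect(const Rect& r, unsigned char color, const Rect& clip)
    {
        for (int y = r.top; y < r.bottom; ++y)
            draw_hline(r.left, r.right, y, color, clip);
    }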
Now you need at least drivers for the mouse, keyboard, and a timer (if not already provided by the operating system). In general, you will need to detect keys and modifier keys (such as Shift), mouse moves, and mouse clicks. Basic timer functions will allow you to detect double clicks.
Then you need to create a window data structure. This structure needs coordinates, i.e. a rectangle, a link to the parent window (if it is not a top window), and a window function, i.e. the function that will be called when this window should handle some events.
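In C++ that record might look like this minimal, illustrative sketch (the child/sibling links anticipate the window tree discussed further down):

    struct Rect { int left, top, right, bottom; };

    struct Window;
    // Argument order follows the window procedure described below:
    // message id first, then the window, then the two parameters.
    typedef void (*WindowFunc)(int msgId, Window* self, long p1, long p2);

    struct Window
    {
        Rect       rect;        // coordinates, relative to the parent
        Window*    parent;      // null for the top (desktop) window
        Window*    firstChild;  // children, forming the window tree
        Window*    nextSibling;
        WindowFunc func;        // called when this window handles events
    };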
Once you can draw on screen, you need some rectangle algebra functions. You need at least a good function to calculate the intersection of rectangles, and a quick conversion of relative to absolute coordinates. For example, if your child window has a parent, then its x and y should recursively be added to the parent's x and y until you reach the top window.
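Sketches of both helpers, reusing the Rect and Window types from the snippets above:

    // Intersection of two rectangles; returns false if they don't overlap.
    bool intersect(const Rect& a, const Rect& b, Rect& out)
    {
        out.left   = a.left   > b.left   ? a.left   : b.left;
        out.top    = a.top    > b.top    ? a.top    : b.top;
        out.right  = a.right  < b.right  ? a.right  : b.right;
        out.bottom = a.bottom < b.bottom ? a.bottom : b.bottom;
        return out.left < out.right && out.top < out.bottom;
    }

    // Relative-to-absolute: add each ancestor's origin, up to the top.
    void to_absolute(const Window* w, int& x, int& y)
    {
        for (; w != 0; w = w->parent)
        {
            x += w->rect.left;
            y += w->rect.top;
        }
    }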
At this point you have your:
- primitive graphical functions,
- window structure,
- mouse driver, keyboard driver, and timer,
- rectangle arithmetic.
Now you can write your main event-harvesting function. This function runs all the time; its purpose is to detect events and send messages to the correct windows. What is an event? Well, when you start your program, store the mouse x and y coordinates. Then, in a loop, check whether they have changed. If they have, find the window at that position ... and send a WM_MOUSEMOVE event to it. Your harvesting function (sketched just after this list) should handle:
- mouse moves
- mouse clicks
- mouse double clicks (remember last click and position, measure time and decide if it is a double click or not)
- timer events
- keyboard buffer changes
...
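A rough shape for that loop (every name is invented; the extern declarations stand in for the drivers and for the messaging layer described next):

    enum { MSG_MOUSEMOVE, MSG_LBUTTONDOWN, MSG_LBUTTONUP };

    extern int     mouse_x();               // from your mouse driver
    extern int     mouse_y();
    extern bool    mouse_button_down();
    extern Window* window_at(int x, int y); // hit-test the window tree
    extern void    post_message(int id, Window* w, long p1, long p2);

    void harvest_events()
    {
        int  lastX = mouse_x(), lastY = mouse_y();
        bool lastBtn = mouse_button_down();

        for (;;)
        {
            int x = mouse_x(), y = mouse_y();
            if (x != lastX || y != lastY)   // the mouse moved: an event
            {
                post_message(MSG_MOUSEMOVE, window_at(x, y), x, y);
                lastX = x; lastY = y;
            }

            bool btn = mouse_button_down();
            if (btn != lastBtn)             // button state changed
            {
                post_message(btn ? MSG_LBUTTONDOWN : MSG_LBUTTONUP,
                             window_at(x, y), x, y);
                lastBtn = btn;
            }

            // ...likewise for timer ticks, the keyboard buffer, and
            // double-click detection (last click time + position).
        }
    }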
Now you should be able to send events to windows. But you really need a mechanism for it: a combination of a message queue and a window procedure. It usually works like this: each window has a window procedure, which commonly accepts four arguments: the message ID (i.e. is it a mouse move, is it a paint message), the window handle, parameter 1, and parameter 2. You can call this window procedure directly, using something like a send_message function, or you can send the window a message via a post_message function, which puts the message in the queue; the window processes messages one by one, eventually receiving this one. So why call some messages directly and queue others? Because of priority. You see, a keyboard click can wait some time before being processed, but a window redraw must complete immediately to prevent flicker and stale data on screen.
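The send/post split can be as small as this sketch (a fixed ring buffer as the queue; all names invented):

    struct Msg { int id; Window* hwnd; long p1, p2; };

    static Msg queue[256];
    static int qHead = 0, qTail = 0;

    // Immediate delivery: calls the window function right now.
    void send_message(int id, Window* w, long p1, long p2)
    {
        w->func(id, w, p1, p2);
    }

    // Deferred delivery: parked in the queue for the message pump.
    void post_message(int id, Window* w, long p1, long p2)
    {
        queue[qTail] = { id, w, p1, p2 };
        qTail = (qTail + 1) % 256;
    }

    Msg* get_message()
    {
        if (qHead == qTail) return 0;       // queue is empty
        Msg* m = &queue[qHead];
        qHead = (qHead + 1) % 256;
        return m;
    }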
So your harvest_events function sends messages to windows using post_message and send_message. And your window message pump gets them using a typical loop like this:
    while ((pmsg = get_message()) != NULL) send_message(pmsg->id, pmsg->hwnd, pmsg->p1, pmsg->p2);
get_message simply obtains a message from the queue, and the loop hands it to send_message. Simple, huh? Well, not quite. This way you would only deliver driver messages to windows, but you also need some functions to redraw windows, move them, etc. When you create move_window, resize_window, show_window, and hide_window functions, your window coordinates will change. Parts of other windows will be uncovered (if a top window is moved or closed). You need to calculate which windows are affected by the coordinate changes and send a paint message to those windows (to repaint only the parts that were uncovered; remember, you have clipped drawing functions, so this will work).
These functions introduce messages such as msg_paint, msg_move, msg_resize, msg_hide...
Last, you need to maintain a hierarchy of windows. Your top window should be the desktop. It should have child windows (application top-level windows). These windows may have further child windows (buttons, edit boxes, etc.). The obvious structure for holding all of this is a window tree. When you detect a mouse click, you have to traverse the window tree, and do it in a smart way (finding out who has focus, who is modal, etc.), to send the message to the right window. And when you draw, you must also traverse all children to see who is uncovered and who is not. Last but not least, you need to handle the mouse cursor's rectangle as a topmost window, to prevent the mouse from flickering as windows are redrawn or (using timers and msg_paint events) animated.
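The click-routing traversal might start out like this sketch (focus and modality checks would layer on top; it assumes siblings are stored topmost-first):

    // Deepest window containing the point; x and y are relative to w's
    // parent.
    Window* window_at_point(Window* w, int x, int y)
    {
        if (x < w->rect.left || x >= w->rect.right ||
            y < w->rect.top  || y >= w->rect.bottom)
            return 0;                       // point is outside w entirely

        int cx = x - w->rect.left;          // make the point child-relative
        int cy = y - w->rect.top;
        for (Window* c = w->firstChild; c; c = c->nextSibling)
            if (Window* hit = window_at_point(c, cx, cy))
                return hit;                 // a child window is deeper

        return w;                           // the point is in w itself
    }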
That's roughly it.
A GUI framework like Qt generally works by taking the existing OS's primitive objects (windows, fonts, bitmaps, etc), wrapping them in more platform-neutral and less clunky classes/structures/handles, and giving you the functionality you'll need to manipulate them. Yes, that almost always involves using the OS's own functions, but it doesn't HAVE to -- if you're designing an API to draw an OpenGL UI, for example, most of the underlying OS's GUI stuff won't even work, and you'll be doing just about everything on your own.
Either way, it's not for the faint of heart. If you have to ask how a GUI framework works, you're not even close to ready to design one. You're better off sticking with an existing framework and extending it to do the spiffy stuff it doesn't do already.