How can I fire a key press or mouse click event at the system level without touching any input device? (Windows)

How can I fire an automatic key press or mouse click event when a color appears on the screen in another application or browser?

It depends a lot on what you want. Do you want to:
- send the keys to your own application,
- send them to another, fixed application, or
- simulate a global keypress?
Simulating keys globally
All of these approaches will cause problems when targeting a specific application, since the active window can change.
SendKeys sends messages to the active application. It's a high-level function taking a string which encodes a sequence of keys.
keybd_event is very low level and injects a global keypress. In most cases SendKeys is easier to use.
mouse_event simulates mouse input.
SendInput supersedes these functions. It's more flexible but a bit harder to use.
Sending to a specific window
When working with a fixed target window, sending it messages can work, depending on how the window processes input. Since this doesn't update all input states it might not always work, but you avoid the race condition with changing window focus, which is worth a lot.
WM_CHAR sends a character in the Basic Multilingual Plane (16 bit).
WM_UNICHAR sends a character supporting the whole Unicode range.
WM_KEYDOWN and WM_KEYUP send keys which will be translated into characters by the keyboard layout.
My recommendation: when targeting a specific window/application, try messages first, and only if that fails fall back to one of the lower-level solutions.
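As a rough illustration (not taken from the original answer), here is a minimal Win32 C++ sketch of both approaches: SendInput for a global keypress and click, and PostMessage with WM_CHAR for a fixed target window. The window title is a placeholder, and for many applications you will need to find a child control (such as an edit control) rather than the top-level window.

#include <windows.h>

// Inject a global key press/release for a virtual-key code.
void SendGlobalKey(WORD vk)
{
    INPUT inputs[2] = {};
    inputs[0].type = INPUT_KEYBOARD;
    inputs[0].ki.wVk = vk;                        // key down
    inputs[1].type = INPUT_KEYBOARD;
    inputs[1].ki.wVk = vk;
    inputs[1].ki.dwFlags = KEYEVENTF_KEYUP;       // key up
    SendInput(2, inputs, sizeof(INPUT));
}

// Move the real cursor and inject a global left click.
void ClickAt(int x, int y)
{
    SetCursorPos(x, y);
    INPUT clicks[2] = {};
    clicks[0].type = INPUT_MOUSE;
    clicks[0].mi.dwFlags = MOUSEEVENTF_LEFTDOWN;
    clicks[1].type = INPUT_MOUSE;
    clicks[1].mi.dwFlags = MOUSEEVENTF_LEFTUP;
    SendInput(2, clicks, sizeof(INPUT));
}

// Post a character directly to a specific window, bypassing focus.
void SendCharToWindow(const wchar_t* title, wchar_t ch)
{
    HWND hwnd = FindWindowW(nullptr, title);      // look up the target by title
    if (hwnd)
        PostMessageW(hwnd, WM_CHAR, (WPARAM)ch, 0);
}

int main()
{
    SendGlobalKey('A');                            // global 'A' keypress
    ClickAt(200, 200);                             // global left click at (200, 200)
    SendCharToWindow(L"Untitled - Notepad", L'A'); // placeholder window title
}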

when a color appears on the screen in another application or browser
I made a program using OpenCV and C++ for operating the mouse with finger gestures. I used 3 colored strips for 3 mouse functions.
Yellow color for Left click
Blue color for Right click
Pink color for controlling cursor position
Whenever the camera detects one of these colors, the associated function takes place. I used mouse_event to perform the mouse actions.
For more information you can look at my code, blog, and video.
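The post above refers to its own code; purely as an illustration of the general idea (and not the author's actual program), a colour-triggered click with OpenCV and mouse_event might look like the following sketch. The HSV thresholds and the pixel-count cutoff are made-up placeholder values that would need tuning for a real camera and lighting.

#define NOMINMAX                 // avoid min/max macro clashes with OpenCV headers
#include <windows.h>
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cam(0);     // default camera
    cv::Mat frame, hsv, mask;
    while (cam.read(frame))
    {
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        // Placeholder "yellow" range; a real program would calibrate this.
        cv::inRange(hsv, cv::Scalar(20, 100, 100), cv::Scalar(30, 255, 255), mask);

        if (cv::countNonZero(mask) > 5000)        // enough yellow pixels -> left click
        {
            mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0);
            mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, 0);
        }

        cv::imshow("mask", mask);                 // visual feedback
        if (cv::waitKey(30) == 27) break;         // Esc to quit
    }
}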

I'm not 100% sure what you want, but if all you are after is running the method linked to the button.Clicked event, then you can manually run that method just like any other method.

You can use the .NET SendKeys class to send keystrokes.
Emulating mouse clicks requires P/Invoke.
I don't know how to detect colors on the screen.

Related

Cocoa: listening for key events and responding to them without a view

First of all hi guys!
I was trying to write a mouse controller app for Mac OS X which reads input from the keyboard and moves the mouse accordingly. By "garbage input" I mean input that was intended for a mouse event but creates text on screen instead.
Before anyone points out that there is a built-in one: it was laggy even at the shortest lag setting, it cannot register more than two buttons at the same time (you have to press the diagonal keys to move diagonally), and if you accidentally press another button, your motion stops when you release it. My first and last reaction was "rubbish!". Adding customization and extra features is my goal.
I want to create a key combination that, while held, blocks the garbage input from being passed to other programs. But with global monitoring it seems like the event is always passed along, and unfortunately I see text like qqqqqqqwwwwwww in unwanted places.
I want it so that when I press q, w, and up, the mouse goes up, but I create a qqqqqqqwwwwww mess on the way. My first idea was creating a view on a popover and handling events there, but whenever I want to use my mouse from the keyboard, seeing a popover is annoying, and I couldn't find a way to show the popover without leaving any garbage keyboard input.
What should I do in this situation?
You will want to use Quartz Event Taps. Note that for an application to tap keyboard events, it has to be trusted for accessibility (as in System Preferences > Security & Privacy > Privacy > Accessibility). Your app can ask to be made trusted using AXIsProcessTrustedWithOptions().
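A minimal sketch of that approach, using the C APIs (so it compiles as C, C++, or Objective-C, linked against the ApplicationServices framework). The choice of keycode 12 (Q on an ANSI keyboard layout) is only an example of "garbage" input to swallow; this is not the asker's actual key mapping.

#include <ApplicationServices/ApplicationServices.h>

// Tap callback: swallow Q key events, pass everything else through.
static CGEventRef tapCallback(CGEventTapProxy proxy, CGEventType type,
                              CGEventRef event, void *userInfo)
{
    if (type == kCGEventKeyDown || type == kCGEventKeyUp)
    {
        int64_t keycode = CGEventGetIntegerValueField(event, kCGKeyboardEventKeycode);
        if (keycode == 12)      // 12 = Q on an ANSI layout; move the mouse here instead
            return NULL;        // returning NULL swallows the event
    }
    return event;               // let everything else through unchanged
}

int main()
{
    // Ask to be trusted for accessibility (shows the system prompt if needed).
    const void *keys[] = { kAXTrustedCheckOptionPrompt };
    const void *vals[] = { kCFBooleanTrue };
    CFDictionaryRef opts = CFDictionaryCreate(kCFAllocatorDefault, keys, vals, 1,
                                              &kCFTypeDictionaryKeyCallBacks,
                                              &kCFTypeDictionaryValueCallBacks);
    AXIsProcessTrustedWithOptions(opts);
    CFRelease(opts);

    CGEventMask mask = CGEventMaskBit(kCGEventKeyDown) | CGEventMaskBit(kCGEventKeyUp);
    CFMachPortRef tap = CGEventTapCreate(kCGSessionEventTap, kCGHeadInsertEventTap,
                                         kCGEventTapOptionDefault, mask,
                                         tapCallback, NULL);
    if (!tap) return 1;         // not trusted yet, or taps are unavailable

    CFRunLoopSourceRef src = CFMachPortCreateRunLoopSource(kCFAllocatorDefault, tap, 0);
    CFRunLoopAddSource(CFRunLoopGetCurrent(), src, kCFRunLoopCommonModes);
    CGEventTapEnable(tap, true);
    CFRunLoopRun();
}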

Multiple cursors in a Windows application

I have found some resources like this dealing with the subject of attaining multiple cursors on Windows when more than one mouse is attached to the system. My requirement is a little simpler, but I need some input on it.
1) What I want is to invoke an application (let's say IE) and do mouse activity (hovering, clicking, etc.) on it, all while causing no disturbance to the system cursor, which should remain free to be used by the user of the desktop.
2) I understand that this cannot be done using the Windows cursor APIs, as the documentation always mentions "the cursor" and there is no built-in concept of multiple cursors.
3) This leaves me with drawing a cursor on the target window. A bitmap, perhaps, which I move around myself? What APIs will be of use here?
How do I simulate the visual effects that are actually produced by real cursor movement? Do I need to send messages to the target window like WM_MOUSEMOVE, WM_SETCURSOR, etc.?
Does sending mouse messages to the other window interfere with the mouse activity the user is currently involved in? The intention is not to disturb the user while the application runs.
Thanks for your inputs.
Create a separate window for each extra mouse that is installed, and put an image of the desired cursor in each window. Then use the Raw Input API to monitor activity on the extra mice and move your windows around as needed, and use mouse_event() or SendInput() to send out your own mouse events (movements, clicks, etc.) as needed.
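Here is a sketch of the Raw Input half of that idea (the per-device cursor windows and bookkeeping are left out). It assumes hwnd is a window whose WndProc you control; the helper names are invented for the example.

#include <windows.h>

// Register to receive WM_INPUT for all mice, even when not focused.
void RegisterForRawMouseInput(HWND hwnd)
{
    RAWINPUTDEVICE rid = {};
    rid.usUsagePage = 0x01;            // generic desktop controls
    rid.usUsage    = 0x02;             // mouse
    rid.dwFlags    = RIDEV_INPUTSINK;  // deliver input even when this window isn't focused
    rid.hwndTarget = hwnd;
    RegisterRawInputDevices(&rid, 1, sizeof(rid));
}

// Call this from your WndProc when it receives WM_INPUT.
void OnRawInput(LPARAM lParam)
{
    RAWINPUT raw = {};
    UINT size = sizeof(raw);
    GetRawInputData((HRAWINPUT)lParam, RID_INPUT, &raw, &size, sizeof(RAWINPUTHEADER));

    if (raw.header.dwType == RIM_TYPEMOUSE)
    {
        HANDLE device = raw.header.hDevice;     // identifies which physical mouse this came from
        LONG dx = raw.data.mouse.lLastX;        // relative motion (for typical mice)
        LONG dy = raw.data.mouse.lLastY;
        bool leftDown = (raw.data.mouse.usButtonFlags & RI_MOUSE_LEFT_BUTTON_DOWN) != 0;
        // Move the cursor-image window you created for `device` by (dx, dy),
        // and call SendInput() here if you want to inject a real click.
        (void)device; (void)dx; (void)dy; (void)leftDown;
    }
}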

Detecting CGAssociateMouseAndMouseCursorPosition

We're making a user-space device driver for OS X that moves the cursor using Quartz Events, and we ran into a problem where games, especially ones that run in windowed mode, can't properly capture the mouse pointer (that is, contain it within the boundaries of their windows). For example, the pointer would go outside the game window and click on the desktop or on nearby inactive applications.
We could fix this if only we could detect when an active application calls CGAssociateMouseAndMouseCursorPosition.
How would you do this? Any ideas are appreciated.
I don't know if this can help you, but there is an option called Focus Follows Mouse.
Focus Follows Mouse: the mouse pointer will automatically change focus to a new window inside this one app when you mouse over it, instead of you having to click a window to get focus and then click again to do something.
http://wineskin.urgesoftware.com/tiki-index.php?page=Manual+4.6+Advanced+-+Options
I have written a few different logical mouse layers (for bridging different input devices, etc.). I have found that hooking into the OS-level WM_INPUT message is a sure way of getting very real-time mouse position information. There is also a less rigorous solution of just polling the mouse data you need from one of Windows' very primitive DLLs; they are lightning fast. You could poll on a 10 ms timer and never see a performance loss on a modern machine.
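For what it's worth, the polling variant on Windows can be as simple as this sketch, which reads the cursor position every 10 ms via GetCursorPos from user32 (this illustrates the answer above; it does not detect CGAssociateMouseAndMouseCursorPosition, which is what the question asks about).

#include <windows.h>
#include <cstdio>

int main()
{
    POINT last = {};
    GetCursorPos(&last);
    for (;;)
    {
        POINT p;
        GetCursorPos(&p);
        if (p.x != last.x || p.y != last.y)   // report only when the cursor moved
        {
            printf("mouse moved to %ld, %ld\n", p.x, p.y);
            last = p;
        }
        Sleep(10);                            // ~10 ms poll interval
    }
}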

Efficiently subclassing standard Cocoa controls

In spite of there being a Human Interface Guidelines document (HIG), a lot of high-quality Mac desktop applications use custom controls. My question is: what is the best approach to start subclassing controls for Cocoa development? It surprises me how little (good) information there is on this topic. What path is best to follow so you don't end up with a nice but half-broken control?
Here's a checklist:
Make sure your control works correctly at double resolution. Use Quartz Debug to test this. You'll want to test both drawing sanity (in all states—normal, selected, pressed, disabled, and any others) and operation sanity (that hit testing matches where things appear on the screen/other destination device).
For extra credit, make sure your control works correctly at 1.5 (or some other, similarly non-integral) resolution.
Test how the standard control works when clicked. You'll probably do this anyway. Do as the standard control does.
Test how the standard control works when half-clicked (mouse down inside, mouse up outside).
Test how the standard control works when dark-side-of-the-clicked (mouse down outside, mouse up inside).
Test how the standard control works when dragged within.
Test the above four with the other mouse buttons (right and middle).
Test what the standard control does when you scroll with a scroll wheel. Also test shift + scroll and, on a mouse that has them (e.g., most Logitech mice), scroll left/right buttons.
Test what the standard control does when you two-finger scroll in each axis and in both axes.
Test what the standard control does when you pinch and when you unpinch.
Test what the standard control does when you swipe with three and four fingers in each axis.
Test how the standard control works with “Full Keyboard Access” turned on. Can you tab into it? Can you press it with the space bar? Can you enter it with the return key? Can you tab out of it?
Test how the standard control responds to Accessibility queries. Use Accessibility Inspector. See the Accessibility Programming Guidelines for Cocoa for information on responding to accessibility queries and messages in your control.
Test your app—including, but not limited to, your custom controls—in VoiceOver. Blindfold yourself and try to use the app with VoiceOver alone.
If applicable, test printing your view. You can print to Preview if you don't want to kill a tree for your development process.
Test printing in other paper sizes. If you're in the US, test A4; otherwise, test US Letter. Test still other paper sizes (such as Legal and A3) if you're feeling thorough.
If you're implementing a scroller (poor you), test that your scroller responds correctly to the “Jump to the (next page|spot that's clicked)” preference in the Appearance pane in System Preferences. “Correctly” means it should do what the user selected.
Make sure it correctly implements all four scroll-arrow-position settings: One at each end (Mac style), both at the lower/left end (NeXT style), both at the upper/right end, and both at each end (power user style). As always, you need to both draw correctly and hit-test/react correctly. (Suggested by #radiofreelunch/by David Dunham)
Also, if you're implementing a scroller, make sure it responds to the “Smooth scrolling” preference correctly.
Test that it responds to different scrolling speed preferences correctly.
If you're implementing a text entry field of some sort, or any view that responds to some sort of special hot key (like Enter to send a message in an inputline), test right-to-left (Hebrew/Arabic) text input and alternate input methods. The Character Viewer is a good start.
Also, test that you don't break ctrl-q. For example, ctrl-q, tab should always enter a tab character. The same typically goes for option + (key), such as option-return in an inputline.
Test that it responds to different key-repeat preferences correctly.
If you implement any custom keyboard shortcuts (⌘ + zero or more other modifiers + one or more character keys) by means other than Cocoa's standard menu shortcut handling, test your custom shortcut behavior under Dvorak. There is no faster way to sour our perceptions of your app than to respond to ⌘' by quitting.
Show your app to users who've never used it nor seen any mockups before. Disqualify programmers. If they don't recognize your control as a (whatever it's supposed to be), redesign it. If you ever say “the scroller is over here” or “you need to click that”, you instantly fail.
Test that your control responds (or doesn't respond, if responding would be dangerous) when your app is in the background. (Suggested by #chucker.)
Test that your control responds, but does not bring the app forward, when your app is in the background and the user clicks on it with the ⌘ key down. (Suggested by #chucker.)
Test resizing your view. Among other things, this will ensure that you set the autoresize mask correctly. You're also looking for drawing bugs—distorted elements, gaps, etc. (Part of this suggested by #Bagelturf.)
If your control is, in fact, a control, send it sizeToFit and make sure that it does the right thing. (Suggested by #Bagelturf.)
If you work with mouse coordinates, don't assume that they will be whole numbers. Ensure that you handle fractional numbers, zeroes (positive and negative), and negative numbers correctly. (Part of this suggested by #Bagelturf.)
You might also consider splitting your control into a control and a cell. In the latter case, also perform all of these tests on your cell embedded in an NSMatrix and in an NSTableColumn.
If your control has a menu, test what happens when the control is at one or more edges of the screen. The menu should move over to not fall outside screen space.
If your control has a menu, test that the user can enter it with the down arrow key when using “Full Keyboard Access”. If it is also a text field (like a combo box), test that this only happens when the user presses the down arrow at the end of the text; otherwise, normal text field behavior should rule: Pressing down on a line that is not the last line should move the cursor down a line, and pressing down on the last line should move to the end of the line.
If your control has a menu, test that it stays open when clicked and does not immediately close when held open. There is a function you can use to do this correctly, and it is available in 64-bit.
If your control has a menu, test that it is navigable (all four arrow keys + Home, End, Page Up, Page Down), usable (spacebar/return press action), and cancellable (esc) with the keyboard.
Hard to add anything to Peter's list, but if you're doing a scroll bar, be sure it handles all the deviant placements of the scroll arrows (like DoubleBoth).

How does a GUI Framework work?

I have been all over the web looking for an answer to this, and my question is this:
How does a GUI framework work? For instance, how does Qt work? Are there any books or websites on the topic of writing a GUI framework from scratch? And does the framework have to call methods from the operating system's GUI framework?
-- Thank you to anyone who takes the time to try to answer this question, and forgive me if I misspelled anything.
In the old days we did a lot of GUI programming from scratch. It is not as hard as it seems, but it requires a few weeks to come up with results.
First you need a good drawing library. Minimal functionality for this library is drawing clipped rectangles (using patterns), lines, bitmaps, and fonts. You can cheat by creating fonts as bitmaps, and a clipped rectangle is just a bunch of horizontal lines.
Now you need at least drivers for the mouse, keyboard, and timer (if not already provided by the operating system). In general, you will need to detect keys, modifier keys (such as Shift), mouse moves, and mouse clicks. Basic timer functions will allow you to detect double clicks.
Then you need to create a window data structure. This data structure needs to have coordinates (i.e. a rectangle), a link to the parent window (if it is not a top window), and a window function, i.e. the function that will be called when this window should handle some events.
Once you can draw on screen you need some rectangle algebra functions. You need at least a good function to calculate the intersection of rectangles, and a quick way to resolve relative to absolute coordinates. For example, if your child window has a parent, then its x and y should recursively be added to the parent's x and y until you reach the top window.
At this point you have your:
- primitive graphical functions,
- window structure,
- mouse driver, keyboard driver, and timer,
- rectangle arithmetic.
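As a concrete illustration of those pieces, here is a small C++ sketch of a window record, rectangle intersection, and relative-to-absolute coordinate resolution. All names (Rect, Window, and so on) are invented for the example; they are not from any particular toolkit.

#include <algorithm>
#include <vector>

struct Rect { int x, y, w, h; };

struct Window
{
    Rect rect;                                    // position relative to the parent
    Window* parent = nullptr;                     // null for the top window
    std::vector<Window*> children;
    void (*proc)(Window*, int msg, int p1, int p2) = nullptr;  // window function
};

// Intersection of two rectangles; returns a zero-sized rect if they don't overlap.
Rect Intersect(const Rect& a, const Rect& b)
{
    int x1 = std::max(a.x, b.x), y1 = std::max(a.y, b.y);
    int x2 = std::min(a.x + a.w, b.x + b.w), y2 = std::min(a.y + a.h, b.y + b.h);
    return { x1, y1, std::max(0, x2 - x1), std::max(0, y2 - y1) };
}

// Resolve a window's relative position to absolute screen coordinates by
// walking up the parent chain, as described above.
void ToAbsolute(const Window* w, int& absX, int& absY)
{
    absX = 0; absY = 0;
    for (; w != nullptr; w = w->parent)
    {
        absX += w->rect.x;
        absY += w->rect.y;
    }
}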
Now you can write your main event harvesting function. This function will run all the time. Its purpose will be to detect events and send messages to the correct windows. What is an event? Well, when you start your program, store the mouse x and y coordinates. Then, in a loop, check if they have changed. If they have changed, find the window at that position ... and send a WM_MOUSEMOVE event to it. Your harvesting function should handle:
- mouse moves
- mouse clicks
- mouse double clicks (remember last click and position, measure time and decide if it is a double click or not)
- timer events
- keyboard buffer changes
...
Now you should be able to send events to windows. But you really need a mechanism for it. It is a combination of a message queue and a window procedure. It usually works like this: each window has a window procedure which commonly accepts four arguments: a message id (i.e. is it a mouse move, is it a paint message), a window handle, parameter 1, and parameter 2. You can call this window procedure directly using something like a send_message function. Or you can send this window a message via a post_message function. This will put the message in the queue, and the window will process messages one by one, eventually receiving this one. So why should you call some messages directly and put others in the queue? Because of priority. You see, a keyboard click can wait some time before being processed. But a window redraw must complete immediately to prevent flicker and wrong data on screen.
So your harvest_events function sends messages to windows using post_message and send_message. And your window message pump gets them using a typical loop like this:
while ((pmsg = get_message()) != NULL) send_message(pmsg->id, pmsg->hwnd, pmsg->p1, pmsg->p2);
get_message simply obtains a message from the queue and calls send_message. Simple, huh? Well, not quite so. This way you would only deliver driver messages to windows, but you also need some functions to redraw windows, move them, etc. When you create move_window, resize_window, show_window, and hide_window functions, your window coordinates will change. Parts of other windows will be uncovered (if a top window is moved or closed). You need to calculate which windows are affected by coordinate changes and send a paint message to those windows (to repaint only the parts that were uncovered; remember, you have clipping drawing functions, so this will work).
These functions introduce messages such as msg_paint, msg_move, msg_resize, msg_hide...
Last, you need to maintain a hierarchy of windows. Your top window should be the desktop. It should have child windows (application top windows). These windows may have further child windows (buttons, edit boxes, etc.). The obvious structure for holding these is a window tree. When you detect a mouse click you have to traverse the window tree and do it in a smart way (finding out who has focus, who is modal, etc.) to send the message to the right window. And when you draw you also must traverse all children to see who is uncovered and who is not. Last but not least, you need to handle the mouse cursor rectangle as a top window to prevent the mouse from flickering as windows are redrawn or (using timers and msg_paint events) animated.
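To tie the messaging pieces together, here is a condensed C++ sketch of a message structure, a queue, post_message/send_message, and the pump described above. The names are illustrative only, and the Window type here just carries the window procedure and child list.

#include <cstdint>
#include <queue>
#include <vector>

enum MsgId { msg_paint, msg_move, msg_resize, msg_mousemove, msg_keydown };

struct Window
{
    std::vector<Window*> children;                       // child windows
    void (*proc)(Window*, MsgId, intptr_t, intptr_t);    // window procedure
};

struct Msg { MsgId id; Window* hwnd; intptr_t p1, p2; };

static std::queue<Msg> g_queue;

// send_message: call the window procedure immediately (e.g. for paints).
void send_message(MsgId id, Window* hwnd, intptr_t p1, intptr_t p2)
{
    if (hwnd && hwnd->proc) hwnd->proc(hwnd, id, p1, p2);
}

// post_message: queue the message for later (e.g. for keyboard input).
void post_message(MsgId id, Window* hwnd, intptr_t p1, intptr_t p2)
{
    g_queue.push({ id, hwnd, p1, p2 });
}

// The message pump from the answer, written out: drain the queue and
// dispatch each message to its window's procedure.
void run_message_pump()
{
    while (!g_queue.empty())
    {
        Msg m = g_queue.front();
        g_queue.pop();
        send_message(m.id, m.hwnd, m.p1, m.p2);
    }
}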
That's roughly it.
A GUI framework like Qt generally works by taking the existing OS's primitive objects (windows, fonts, bitmaps, etc), wrapping them in more platform-neutral and less clunky classes/structures/handles, and giving you the functionality you'll need to manipulate them. Yes, that almost always involves using the OS's own functions, but it doesn't HAVE to -- if you're designing an API to draw an OpenGL UI, for example, most of the underlying OS's GUI stuff won't even work, and you'll be doing just about everything on your own.
Either way, it's not for the faint of heart. If you have to ask how a GUI framework works, you're not even close to ready to design one. You're better off sticking with an existing framework and extending it to do the spiffy stuff it doesn't do already.
