How to Determine Text Cursor Position in Windows - winapi

What is the best way to determine the screen co-ordinates of the currently active text input cursor?
I need this for an in-line transliteration program so that I can display some suggestion options to the user as the text is entered.

First attach your input processing to the thread of the active application with AttachThreadInput. Then get the caret's position with GetCaretPos. The position is in client coordinates, so call GetFocus to get the handle of the window that owns the caret, then convert the coordinates to screen coordinates with ClientToScreen. Finally, detach the thread input by calling AttachThreadInput again.
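A minimal sketch of those steps, assuming the text is being typed into the foreground window of another process (the function name is illustrative and error handling is kept to a minimum):

#include <windows.h>

POINT GetActiveCaretScreenPos()
{
    POINT pt = { 0, 0 };
    HWND fg = GetForegroundWindow();
    if (!fg)
        return pt;

    DWORD fgThread = GetWindowThreadProcessId(fg, NULL);
    DWORD myThread = GetCurrentThreadId();

    // Attach to the foreground thread so GetFocus/GetCaretPos see its state.
    if (AttachThreadInput(myThread, fgThread, TRUE))
    {
        HWND focus = GetFocus();            // window that owns the caret
        if (focus && GetCaretPos(&pt))      // caret position in client coordinates
            ClientToScreen(focus, &pt);     // convert to screen coordinates

        AttachThreadInput(myThread, fgThread, FALSE);   // detach again
    }
    return pt;
}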

Related

How to track mouse movements without limiting it to screen size?

I'm using WM_MOUSEMOVE to get changes in mouse position. When simulating "knobs", for example, it's desirable to let the user move the mouse up or down without any limits. In these cases I hide the cursor and use SetCursorPos to reset its position every time the user moves it, and track just the difference from the original position.
Unfortunately this doesn't seem to work: if I set the mouse position, it sometimes works, but sometimes the cursor ends up one or more pixels away, which is just wrong. An even bigger problem is that after the call another WM_MOUSEMOVE seems to be delivered, which triggers the same handling again because it wants to move the cursor back to the original position. So it ends up in an infinite cycle of setting the mouse position and receiving messages until the user releases the mouse button.
What's the correct approach or what's the problem?
The raw input system can do this - it lets you register for raw mouse input that isn't clipped or confined to the screen boundaries.
Broadly speaking, you register for raw input using RegisterRawInputDevices(). Your window will then receive WM_INPUT messages, which you process using the GetRawInputData() function.
See Using Raw Input for an example.
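A rough sketch of that setup, assuming hwnd is your window and the registration happens during window initialization:

// Register the mouse as a raw input device (generic desktop page, mouse usage).
RAWINPUTDEVICE rid;
rid.usUsagePage = 0x01;      // HID generic desktop controls
rid.usUsage     = 0x02;      // mouse
rid.dwFlags     = 0;         // default: input delivered while the window has focus
rid.hwndTarget  = hwnd;
RegisterRawInputDevices(&rid, 1, sizeof(rid));

// Then, in the window procedure:
case WM_INPUT:
{
    RAWINPUT raw;
    UINT size = sizeof(raw);
    if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, &raw, &size,
                        sizeof(RAWINPUTHEADER)) != (UINT)-1 &&
        raw.header.dwType == RIM_TYPEMOUSE)
    {
        // Movement deltas, not clipped to the screen edges.
        LONG dx = raw.data.mouse.lLastX;
        LONG dy = raw.data.mouse.lLastY;
        // ... drive the knob with dx/dy ...
    }
    return 0;
}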
I hide cursor and use SetCursorPos to change its position every time user moves with it and detect just the difference from the original position.
This is just plain wrong. Instead, use SetCapture() to capture the mouse. All movements will be reported as WM_MOUSEMOVE messages with coordinates that are relative to the specified window, even if the mouse is outside of that window, until you release the capture.
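For example (a window-procedure excerpt; lastPos is assumed to be a member or static POINTS holding the previous position):

case WM_LBUTTONDOWN:
    SetCapture(hwnd);                    // all mouse input now goes to this window
    lastPos = MAKEPOINTS(lParam);        // remember where the drag started
    return 0;

case WM_MOUSEMOVE:
    if (GetCapture() == hwnd)
    {
        POINTS p = MAKEPOINTS(lParam);   // client-relative, even outside the window
        int dx = p.x - lastPos.x;
        int dy = p.y - lastPos.y;
        lastPos = p;
        // ... apply dx/dy to the knob ...
    }
    return 0;

case WM_LBUTTONUP:
    ReleaseCapture();                    // stop capturing
    return 0;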
Asking the user to keep moving the mouse even after the cursor has hit the screen limit is a very bad idea in terms of user interface, IMHO.
Some games take another approach: when the mouse hits the "limit", the game enters a special mode in which things appear to function exactly as if the mouse were still moving, even though the user isn't moving it. When the user wants to exit that mode, he just has to move the mouse off the limit.
Doing so requires a timer, armed when the mouse hits the limit, that periodically executes code as if the mouse were moving. The timer is stopped when a real mouse movement takes the cursor off the limit.
OK folks, so I found a simple enough solution:
The main problem is that SetCursorPos may not set the coordinates exactly; I guess it's because of some high-resolution processing, but it's probably a bug nevertheless. Anyway, if SetCursorPos doesn't set the coordinates exactly (it is off by ±1 in x and/or y), it also sends a WM_MOUSEMOVE to the target window. As a result the window performs the exact same operation as before, and this goes on and on.
So the solution is to remove all WM_MOUSEMOVE messages right after SetCursorPos:
MSG msg;
while (::PeekMessage(&msg, NULL, WM_MOUSEMOVE, WM_MOUSEMOVE, PM_REMOVE)) { };
Then retrieve the current mouse cursor position using ::GetCursorPos.
It's ugly, but it seems to fix the problem. Basically, at some cursor positions the system always adds or subtracts 1 in either coordinate, so this way you let the system do its weird stuff and use the new coordinates, instead of trying to persuade the system that your coordinates are the correct ones :).
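Putting it together, the workaround looks roughly like this (origX and origY are the coordinates you tried to restore):

::SetCursorPos(origX, origY);        // may land +-1 pixel off and echo a WM_MOUSEMOVE

MSG msg;                             // throw away the echoed WM_MOUSEMOVE messages
while (::PeekMessage(&msg, NULL, WM_MOUSEMOVE, WM_MOUSEMOVE, PM_REMOVE)) { }

POINT actual;
::GetCursorPos(&actual);             // accept whatever position the system settled on
// and measure future deltas against 'actual' instead of (origX, origY)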

Any way to trigger a callback function while hovering over a point in Matlab?

I am using a while loop, and within it I use ginput in MATLAB to capture the mouse position. Each time I check whether the returned position is within some area, so that I can plot a curve on the current figure. The problem is that with ginput I have to press Enter before the positions are returned. Is there any way to capture the mouse event so that when the cursor hovers over certain points, a callback function is triggered? Thanks.
Since you already have a figure you're using, you could set the listening property for the figure:
set(gcf,'WindowButtonMotionFcn', @mouseMoveListener);
But now you have to create a function called 'mouseMoveListener' (if you want to name it something else, change the name after the @ sign to whatever you want, and make sure the actual callback function is named that too).
Within your function mouseMoveListener you can now get the mouse coordinates:
MousePos = get(mainAxis,'CurrentPoint'); % mainAxis is the handle of the axes you are tracking (e.g. gca)
This tells you the current position of the mouse with respect to the axes coordinates. From there, you can use whatever if statement you like to check that the position is where you want it, and perform whatever tasks you want based on that information.

Programmatically find blink cursor position in windows c++?

How do I find the position of the blinking cursor in Windows from C++? In many cases I need to send a button click at the position of the blinking cursor, but I haven't found any function that takes care of that.
OS: Windows 7 (64-bit), C++
It is called a "caret"; the cursor is the mouse pointer. You use GetCaretPos() to get its position, but the returned position is relative to the client area of the window that owns the caret. Which probably means you need to find that window first - use GetForegroundWindow() for that. And don't send button-click messages; they are posted, not sent, so use PostMessage().
Avoid all of this by just using SendInput().
Note that UIPI (the user interface component of UAC) prevents you from poking stuff into a window owned by an elevated process.
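A sketch of a SendInput-based click at a given screen position on the primary monitor (the function name is illustrative; coordinates have to be scaled to the 0..65535 range that MOUSEEVENTF_ABSOLUTE expects):

#include <windows.h>

void ClickAtScreenPos(POINT pt)
{
    int cx = GetSystemMetrics(SM_CXSCREEN);
    int cy = GetSystemMetrics(SM_CYSCREEN);

    INPUT in[3] = {};
    in[0].type = INPUT_MOUSE;
    in[0].mi.dx = (pt.x * 65535) / cx;          // normalized absolute coordinates
    in[0].mi.dy = (pt.y * 65535) / cy;
    in[0].mi.dwFlags = MOUSEEVENTF_MOVE | MOUSEEVENTF_ABSOLUTE;

    in[1] = in[0];
    in[1].mi.dwFlags |= MOUSEEVENTF_LEFTDOWN;   // press ...
    in[2] = in[0];
    in[2].mi.dwFlags |= MOUSEEVENTF_LEFTUP;     // ... and release the left button

    SendInput(3, in, sizeof(INPUT));
}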
GetGUIThreadInfo() is probably your best bet; pass it with idThread = 0 to get the info from the currently active thread, and then check the rcCaret member of the returned GUITHREADINFO structure. You'll then need to use ClientToScreen() with the hwndCaret value to convert client-relative coordinates to screen coordinates.
Note that this only works for apps that use the Win32 caret functions - specifically SetCaretPos(). If an app draws its own caret without using these, you may not get anything meaningful back. (Some apps, like Word, draw their own caret, but still call SetCaretPos so that accessibility aids that need to track the caret can use this technique.)
The rectangle you get back can sometimes be wider than the actual caret. When a bitmap is used for the caret, as is the case for Right-To-Left or Left-To-Right carets that have a little 'flag' attached to the top, you'll get back a rectangle that's a bit wider than the actual caret area, and may need to adjust or otherwise figure out where within this area the actual caret bar is - it may or may not be in the exact middle. Looks like for Notepad++ you should be fine, though.
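A sketch of that approach (the function name is illustrative):

#include <windows.h>

bool GetForegroundCaretRect(RECT* out)
{
    GUITHREADINFO gti = { sizeof(GUITHREADINFO) };
    if (!GetGUIThreadInfo(0, &gti) || !gti.hwndCaret)   // 0 = currently active thread
        return false;

    // rcCaret is relative to hwndCaret's client area; convert both corners.
    POINT tl = { gti.rcCaret.left,  gti.rcCaret.top };
    POINT br = { gti.rcCaret.right, gti.rcCaret.bottom };
    ClientToScreen(gti.hwndCaret, &tl);
    ClientToScreen(gti.hwndCaret, &br);

    SetRect(out, tl.x, tl.y, br.x, br.y);
    return true;
}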

Mouse state winapi

Is there any way to get mouse state (position, buttons states) using winapi in C++?
I don't want to use windows messages (WM_MOUSEMOVE, WM_LBUTTONDOWN, etc).
Thank you!
It sounds like you are looking for GetCursorInfo and GetKeyState. The latter you call with virtual key codes that specify the mouse button of interest.
If you only need cursor position, you can just use GetCursorPos(). Remember that both GetCursorInfo() and GetCursorPos() return screen coordinates. Use ScreenToClient() to convert to client area offsets.
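For example, a simple polling sketch (hwnd is assumed to be your window, only needed if you want client-area coordinates):

POINT pt;
GetCursorPos(&pt);                 // cursor position in screen coordinates
ScreenToClient(hwnd, &pt);         // optional: convert to client-area coordinates

// The high bit set means the button is currently down.
bool leftDown   = (GetKeyState(VK_LBUTTON) & 0x8000) != 0;
bool rightDown  = (GetKeyState(VK_RBUTTON) & 0x8000) != 0;
bool middleDown = (GetKeyState(VK_MBUTTON) & 0x8000) != 0;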
Although the OP didn't want to use Windows Messages, I just wanted to mention something as a sidenote.
Something I found is that when getting the cursor position inside a message handler (for instance WM_SETCURSOR), most of the literature recommends using GetMessagePos() to retrieve the cursor's position at the time the message was sent. However, that's the position before the mouse moved, not after, so the returned position 'lags' behind by a pixel when trying to do mouseover detection over an area.

How do you get CreateWindowEx() to create the window on a specific monitor?

I've determined that I can use GetSystemMetrics(SM_CMONITORS) to query the number of attached monitors, but is there any way to control which monitor CreateWindowEx() uses for the window?
Yes, via the "x" and "y" arguments. Use EnumDisplayMonitors (pass two NULLs) to enumerate the monitors. Your MonitorEnumProc callback gets a RECT* with the monitor's display rectangle. You'd get negative coordinates (e.g. a negative RECT.left) for a monitor located to the left of your main one.
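A sketch of that enumeration (the helper function is illustrative):

#include <windows.h>
#include <vector>

// Collect every monitor's display rectangle, in virtual-screen coordinates.
static BOOL CALLBACK MonitorEnumProc(HMONITOR, HDC, LPRECT rc, LPARAM lp)
{
    reinterpret_cast<std::vector<RECT>*>(lp)->push_back(*rc);
    return TRUE;                       // keep enumerating
}

std::vector<RECT> GetMonitorRects()
{
    std::vector<RECT> rects;
    EnumDisplayMonitors(NULL, NULL, MonitorEnumProc,
                        reinterpret_cast<LPARAM>(&rects));
    return rects;                      // monitors left of the primary have negative coordinates
}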
Each monitor simply displays some part of the desktop, so showing the window on a particular monitor is a matter of moving the window to the part of the desktop displayed by that monitor. When you call CreateWindowEx (or CreateWindow) you can specify x and y coordinates for the window, so displaying it on a particular monitor simply means specifying coordinates that fall within the area displayed by that monitor.
You can find the work areas for the monitors on a system with GetMonitorInfo.
The x and y parameters specify the location of the new window. This point can be anywhere on the virtual screen (all the monitor rectangles combined).
If you want to create the window on the same monitor as another window, you can call MonitorFromWindow. Otherwise you can enumerate all the monitors with EnumDisplayMonitors.
Either way, once you have an HMONITOR handle you must then call GetMonitorInfo. Your x and y parameters should be values inside the bounds of the rcWork member of the MONITORINFO struct. You would normally choose values that put your window in the center of this rectangle.
It is important to use the work-area rectangle and not the full monitor rectangle, because you don't want your window to appear underneath the taskbar and other always-on-top appbars.
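Putting that together, a sketch (windowClass, hInstance and referenceWindow are assumed to exist already; the window size is illustrative):

// Pick the monitor that shows referenceWindow (or enumerate and pick one yourself).
HMONITOR mon = MonitorFromWindow(referenceWindow, MONITOR_DEFAULTTONEAREST);

MONITORINFO mi = { sizeof(MONITORINFO) };
GetMonitorInfo(mon, &mi);

const int w = 800, h = 600;   // desired window size
int x = mi.rcWork.left + (mi.rcWork.right  - mi.rcWork.left - w) / 2;
int y = mi.rcWork.top  + (mi.rcWork.bottom - mi.rcWork.top  - h) / 2;

HWND hwnd = CreateWindowEx(0, windowClass, TEXT("My window"),
                           WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                           x, y, w, h,
                           NULL, NULL, hInstance, NULL);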
