Spoof the mouse coordinates when sending/posting a message (GetMessagePos) - winapi

I'm posting the WM_LBUTTONDOWN/WM_LBUTTONUP messages to simulate a mouse click on a specific location. That works most of the time, but there's one application which uses the GetMessagePos function to calculate the coordinates.
Is there any way to send/post a message with custom mouse coordinates?
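As far as I know, there is no supported way to attach fake coordinates to a posted message: GetMessagePos reads the point the system stores with each queued message, and for a posted message that is the real cursor position at posting time, not the coordinates you pack into lParam. If moving the real cursor is acceptable, one workaround is to generate genuine input with SendInput, so the lParam coordinates and GetMessagePos agree. A minimal sketch (the ClickAt helper name is mine):

    #include <windows.h>

    // Simulate a left click at a screen position by generating real input.
    // Because the cursor actually moves, GetMessagePos in the target
    // application reports the same coordinates as the message's lParam.
    void ClickAt(int screenX, int screenY)
    {
        // MOUSEEVENTF_ABSOLUTE expects coordinates normalized to 0..65535.
        LONG nx = screenX * 65535 / (GetSystemMetrics(SM_CXSCREEN) - 1);
        LONG ny = screenY * 65535 / (GetSystemMetrics(SM_CYSCREEN) - 1);

        INPUT inputs[3] = {};
        inputs[0].type = INPUT_MOUSE;
        inputs[0].mi.dx = nx;
        inputs[0].mi.dy = ny;
        inputs[0].mi.dwFlags = MOUSEEVENTF_MOVE | MOUSEEVENTF_ABSOLUTE;
        inputs[1].type = INPUT_MOUSE;
        inputs[1].mi.dwFlags = MOUSEEVENTF_LEFTDOWN;
        inputs[2].type = INPUT_MOUSE;
        inputs[2].mi.dwFlags = MOUSEEVENTF_LEFTUP;
        SendInput(3, inputs, sizeof(INPUT));
    }

An alternative, if you must keep posting messages, is to call SetCursorPos with the target point right before PostMessage, though that also moves the visible cursor.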

Related

Intercepting the mouse coordinates and modifying them

I would like to create an app that intercepts the mouse coordinates and modifies them.
I want to build an app that runs in the background and intercepts mouse movement, filtering it, for all apps running on macOS.
Suppose the mouse scrolls freely, but when the user presses a global shortcut, like Cmd+Alt+X, vertical scrolling is filtered out and only horizontal scrolling is passed on to macOS.
First question: what should I use to do that? Just point me in the right direction if the topic is too extensive.
Second question: is this possible from a sandboxed app?
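One direction to look into, assuming the app does not have to be sandboxed: a Quartz event tap (CGEventTapCreate) can observe and modify low-level input events for the whole session. Below is a sketch that zeroes the vertical scroll deltas while a flag is set; the gFilterVertical flag and the global-shortcut handler that toggles it are assumptions, not shown here:

    #include <ApplicationServices/ApplicationServices.h>

    // Toggled by your global-shortcut handler (not shown).
    static bool gFilterVertical = false;

    // Event-tap callback: strip the vertical component from scroll events
    // so only horizontal scrolling reaches the rest of the system.
    static CGEventRef ScrollFilter(CGEventTapProxy, CGEventType type,
                                   CGEventRef event, void *)
    {
        if (type == kCGEventScrollWheel && gFilterVertical) {
            // Axis 1 is vertical: clear line, pixel and fixed-point deltas.
            CGEventSetIntegerValueField(event, kCGScrollWheelEventDeltaAxis1, 0);
            CGEventSetIntegerValueField(event, kCGScrollWheelEventPointDeltaAxis1, 0);
            CGEventSetDoubleValueField(event, kCGScrollWheelEventFixedPtDeltaAxis1, 0);
        }
        return event;
    }

    int main()
    {
        CFMachPortRef tap = CGEventTapCreate(
            kCGSessionEventTap, kCGHeadInsertEventTap, kCGEventTapOptionDefault,
            CGEventMaskBit(kCGEventScrollWheel), ScrollFilter, nullptr);
        if (!tap) return 1; // typically means no Accessibility permission

        CFRunLoopSourceRef src =
            CFMachPortCreateRunLoopSource(kCFAllocatorDefault, tap, 0);
        CFRunLoopAddSource(CFRunLoopGetCurrent(), src, kCFRunLoopCommonModes);
        CGEventTapEnable(tap, true);
        CFRunLoopRun();
    }

As for the second question: event taps that modify events require the Accessibility (trusted-process) permission, which to my knowledge is not compatible with the App Sandbox, so this most likely cannot be done from a sandboxed app.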

How to detach a WebGL transform control when clicking somewhere other than the control

I'm developing WebGL graphics with three.js.
While developing, I ran into something I'm stuck on.
I'm trying to detach the transform control when I click somewhere else (e.g., the grid or another object) rather than the transform control itself.
I implemented this by checking the clicked position.
For example, I set up a mousedown event handler in which I store the mouse position, and a mouseup event handler in which I compare the mouseup position with the stored mousedown position. If the two positions are the same, I detach the transform control. However, with this implementation the transform control also disappears when I click the transform control itself.
I just want to detach the transform control when I click a position outside the transform control.
Can anyone please help me with this issue?
Thank you in advance.

Discard mouse events on NSWindow based on click position

Let's say I have a floating, borderless, circular NSWindow.
It is circular because the content view simply draws a red circle.
That content view needs to be layer-backed ([contentView setWantsLayer:YES]), because I'm applying Core Animation effects to it, e.g., animated scaling.
Usually, the clickable area of an NSWindow is defined by the transparency of the pixels of the content view. Unfortunately, once the content view of an NSWindow becomes layer-backed, transparent areas will also receive clicks.
In my case this is a serious problem, because I only want to receive clicks within the circle's radius. As it stands, a click that is within the rect of the window but beyond the circle's radius will activate the window (and thus the entire app), which it shouldn't. The window can also be dragged via the transparent corners of its content view.
My initial thought was to override [NSWindow sendEvent:] in a subclass and check whether the click was performed within the radius, using [theEvent locationInWindow]. I thought I could simply discard the event if it's beyond the radius by not calling [super sendEvent:theEvent]. This, however, did not work: I noticed that the window's mouseDown: method is called even before sendEvent:.
I've searched a lot, but the only idea I found was to place a proxy-like, non-layer-backed NSWindow on top of the window, which delegates clicks conditionally; this, however, led to unpredictable UI behavior.
Does anyone have an idea how to solve this?
So after a few weeks, I came to the following results:
A) Proxy window:
Make use of a non-layer-backed proxy window, which is placed on top of the target window as a child window. The proxy window has the same shape as the target window, and since it is not layer-backed, it will properly receive and ignore events. The proxy window delegates all events to the target window by overriding sendEvent:. The target window is set to ignore all mouse events.
B) Global Mouse Pointer observation:
Install both a global and local event monitor for NSMouseMovedMask|NSLeftMouseDraggedMask events using addGlobalMonitorForEventsMatchingMask and addLocalMonitorForEventsMatchingMask. The event monitors disable and enable ignoring mouse events on all registered target windows based on the current global mouse position. In the case of circular windows, the distance between the mouse pointer and every target window must be calculated.
Both approaches generally work well, but I've been experiencing some unpredictable misbehavior with the child-window approach (where the child window gets 'out of sync' with its parent's position).
UPDATE: Both approaches have some significant disadvantages:
In A), the proxy window may sometimes get out of sync and be placed slightly off the actual window.
In B), the event monitor has a big impact on battery life while moving the mouse, even if the app is not the front-most application.
If you want to discard a mouseDown event based on its position, you can use:
    CGPathContainsPoint(path, transform, point, eoFill) -> Bool
Set up your path to match your graphics: circles, ellipses, rectangles, triangles, arbitrary paths, and even compound paths (paths with holes in them).
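For the circular window described in the question, that check could look roughly like the following; HitsCircle is a hypothetical helper, and the point is assumed to be in the content view's coordinate space:

    #include <CoreGraphics/CoreGraphics.h>

    // Return true if a click at `p` falls inside the circular content area,
    // so events outside the circle can be discarded.
    bool HitsCircle(CGPoint p, CGRect contentBounds)
    {
        CGPathRef circle = CGPathCreateWithEllipseInRect(contentBounds, nullptr);
        bool inside = CGPathContainsPoint(circle, nullptr, p, false);
        CGPathRelease(circle);
        return inside;
    }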

Gtk/GtkD Detect release of mouse button on window resize?

I'm trying to improve a plotting library that I wrote with GtkD (the D bindings for Gtk). Scatter plots with a lot of points take a long time to resize. I want to rescale the image, allowing pixelation, while the user is dragging the window edge to resize, and only re-render it when the mouse button is released.
Is there an API to detect whether the user is still holding down the mouse button to drag the window edge when a window is being resized? If you are not familiar with GtkD, a response in terms of the C Gtk API would still be appreciated.
You can add a 500 millisecond timeout to the redraw, resetting the timer on each resize event. This lets the user see a (pixelated) preview while dragging and defers the expensive re-render until the resize has settled.
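In terms of the C Gtk API, that debounce could look roughly like this; full_redraw stands in for your expensive re-render, and on_configure would be connected to the window's configure-event signal:

    #include <gtk/gtk.h>

    static guint pending = 0;

    // Fires 500 ms after the last resize event: do the expensive re-render.
    static gboolean full_redraw(gpointer widget)
    {
        pending = 0;
        gtk_widget_queue_draw(GTK_WIDGET(widget)); // replace with full re-render
        return G_SOURCE_REMOVE;
    }

    // Restart the timer on every resize; show the cheap, pixelated
    // scaled image in the meantime.
    static gboolean on_configure(GtkWidget *widget, GdkEventConfigure *,
                                 gpointer)
    {
        if (pending)
            g_source_remove(pending);
        pending = g_timeout_add(500, full_redraw, widget);
        return FALSE; // propagate the event as usual
    }

    // Hook it up once at startup:
    // g_signal_connect(window, "configure-event", G_CALLBACK(on_configure), NULL);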

MVC Mouse events in view design question

This is a design question regarding an MVC implementation. I am creating a 2D graphics app using Qt and OpenGL, but I do not think the technology matters.
My view is an OpenGL widget; whatever is to be drawn is stored in the model, and the controller should modify the model and have the OpenGL widget redraw the scene.
The view should capture the following mouse events: MouseDown, MouseMove, and MouseRelease, and transfer them to the controller to make the proper decision on what to do when the user clicks or drags the mouse.
I am debating between two approaches: encapsulate the mouse handling inside the OpenGL widget and just report clicks and drags back to the controller, or transfer the mouse events as-is to the controller and let it handle all the logic to determine the clicks and drags.
Any advice is very much appreciated.
Thank you
I think the widget is going to be getting mouse coordinates in the viewport/"view space" coordinate system, which may not make much sense to the controller. I think your widget should convert the coordinates of any clicks and drags to world space, then pass them to the controller.
Why is this good? Because it avoids your controller needing any special knowledge of the viewport/widget, and so preserves encapsulation. If you add more viewports/widgets, or maybe even a console or a script that also wants to feed the controller, they can all pass their "instructions" in world space and the controller will function quite happily. Your viewport is already "aware" of "world space" and "view space", or it couldn't have rendered your model.
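A sketch of that division of labor in Qt/C++; the Controller interface and the viewTransform member are illustrative assumptions, not part of any Qt API:

    #include <QMouseEvent>
    #include <QOpenGLWidget>
    #include <QTransform>

    // Hypothetical controller interface that accepts world-space input.
    struct Controller {
        virtual void pressed(QPointF world) = 0;
        virtual void dragged(QPointF world) = 0;
        virtual void released(QPointF world) = 0;
        virtual ~Controller() = default;
    };

    class CanvasView : public QOpenGLWidget {
    public:
        explicit CanvasView(Controller *c) : controller(c) {}

    protected:
        // The widget owns the view-space -> world-space conversion, so the
        // controller never needs to know anything about the viewport.
        void mousePressEvent(QMouseEvent *e) override {
            controller->pressed(toWorld(e->pos()));
        }
        void mouseMoveEvent(QMouseEvent *e) override {
            controller->dragged(toWorld(e->pos()));
        }
        void mouseReleaseEvent(QMouseEvent *e) override {
            controller->released(toWorld(e->pos()));
        }

    private:
        QPointF toWorld(const QPoint &viewPos) const {
            // viewTransform maps world space to widget pixels (pan/zoom);
            // inverting it takes a click back into world space.
            return viewTransform.inverted().map(QPointF(viewPos));
        }

        QTransform viewTransform; // kept up to date by pan/zoom code (not shown)
        Controller *controller;
    };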
