Get position of insertion point / caret (not of the mouse pointer) - Windows

I'd like to get the position of the caret, also known as insertion point or cursor, on the screen, as opposed to the position of the mouse pointer. Is that possible at all?

Related

Which algorithm do browsers use to detect the mouse over an element with rounded edges and rotation?

How is the hit area of elements with rounded edges (border-radius) or rotation calculated against the mouse point? Is it SAT? How are corner angles defined?
Mouse point/pixel represented as "cursor"
You can use the SDF of a 2D box minus the radius of the rounded corners and, as said in the video, transform the space to apply the translation and the rotation. To know whether the cursor is in the box, you just check whether the result is negative.
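A minimal sketch of that check in Python (the function names and parameters are illustrative; the SDF itself is the standard rounded-box distance function):

```python
import math

def sd_rounded_box(px, py, cx, cy, hw, hh, r, angle):
    """Signed distance from point (px, py) to a rounded box centered at
    (cx, cy) with half-extents (hw, hh), corner radius r, rotated by
    `angle` radians. Negative result means the point is inside."""
    # Transform the point into the box's local space: translate to the
    # box center, then rotate by -angle (the inverse of the box rotation).
    dx, dy = px - cx, py - cy
    c, s = math.cos(-angle), math.sin(-angle)
    lx = dx * c - dy * s
    ly = dx * s + dy * c
    # SDF of an axis-aligned box shrunk by r, then grown back by r,
    # which rounds the corners.
    qx = abs(lx) - (hw - r)
    qy = abs(ly) - (hh - r)
    outside = math.hypot(max(qx, 0.0), max(qy, 0.0))
    inside = min(max(qx, qy), 0.0)
    return outside + inside - r

def hit_test(px, py, *box):
    # The cursor is inside the rounded, rotated box iff the SDF is <= 0.
    return sd_rounded_box(px, py, *box) <= 0.0
```

The corner radius is handled for free: points near a corner but outside the quarter-circle get a positive distance even though they are inside the sharp-cornered bounding box.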

Check if a point lies within an axis-aligned rectangle efficiently as possible, including the edge?

I am working on an interactive web application, and I'm currently implementing a multi-select feature similar to the way Windows lets you select multiple desktop icons by dragging a rectangle.
Due to limitations of the library I'm required to use, implementing this has already become quite resource intensive:
On initial click, store the position of the mouse cursor.
On each pixel that the mouse cursor moves, perform the following:
Destroy the previous selection rectangle, if it exists, so it doesn't appear on the screen anymore.
Calculate the width and height of the new selection rectangle using the original cursor position and the current cursor position.
Create a new selection rectangle using the original cursor position, the width and the height
Display this rectangle on the screen
As you can see, there are quite a few things happening every time the cursor moves a single pixel. I've looked into this as much as I can and there's no way I can make it any more efficient or any faster.
My next step is actually selecting the objects on the screen when the selection rectangle moves over them. I need to implement this algorithm myself so I have the freedom to make it as efficient/fast as possible. What I need to do is iterate through the objects on the screen and check each one to see if it lies in the rectangle. Since that loop will consume even more resources, the check itself needs to be done as efficiently as possible.
Each object that can be selected can be represented by a single point, P(x, y).
How can I check if P(x, y) is within the rectangles I create in the fastest/most efficient way?
Here's the relevant information:
There can be an arbitrary number of objects that can be selected on the screen at any one time
The selection rectangles will always be axis-aligned
The information I have about the rectangles is their original point, their height, and their width.
How can I achieve what I need to do as fast as possible?
Checking whether point P lies inside rectangle R is simple and fast
(in coordinate system with origin in the top left corner)
(P.X >= R.Left) and (P.X <= R.Right) and (P.Y >= R.Top) and (P.Y <= R.Bottom)
(precalculate Right and Bottom coordinates of rectangle)
Perhaps you could accelerate the overall algorithm if the objects satisfy some conditions that let you avoid checking all of them at every step.
Example: sort the object list by X coordinate and check only those objects that lie in the Left..Right range
More advanced approach: organize the objects in a space-partitioning data structure like a k-d tree and execute range searches very fast
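The basic check with precalculated Right/Bottom, plus the sort-by-X filtering, can be sketched as follows (a minimal sketch; the origin is at the top left as in the answer, and the helper names are illustrative):

```python
import bisect

def make_rect(x, y, width, height):
    # Precalculate Right and Bottom once per rectangle (origin at the
    # top-left corner, y grows downward as in screen coordinates).
    return (x, y, x + width, y + height)

def contains(rect, px, py):
    left, top, right, bottom = rect
    return left <= px <= right and top <= py <= bottom

def select(points_sorted_by_x, rect):
    """points_sorted_by_x: list of (x, y) tuples sorted by x.
    Binary-search the Left..Right slice so only candidates in that
    X range get the exact point-in-rect test."""
    left, top, right, bottom = rect
    lo = bisect.bisect_left(points_sorted_by_x, (left, float("-inf")))
    hi = bisect.bisect_right(points_sorted_by_x, (right, float("inf")))
    return [p for p in points_sorted_by_x[lo:hi] if top <= p[1] <= bottom]
```

Note that `<=` on both sides makes the edges inclusive, which the question asks for.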
You can iterate through every object on screen and check whether it lies in the rectangle in a Cartesian coordinate system using the following condition:
p.x >= rect.left && p.x <= rect.right && p.y <= rect.top && p.y >= rect.bottom
If you are going to have no more than 1000 points on screen, just use the naive O(n) method of iterating through each point. If you are completely sure that you need to optimize this further, read on.
Depending on the frequency of updating the points and number of points being updated each frame, you may want to use a different method potentially involving a data structure like Range Trees, or settle for the naive O(n) method.
If the points aren't going to move around much and are sparse (i.e. far apart from each other), you can use a Range Tree or similar for O(log n) checks. Bear in mind though that updating such a spatial partitioning structure is resource intensive, and if you have a lot of points that are going to be moving around quite a bit, you may want to look at something else.
If a few points are going to be moving around over large distances, you may want to look at partitioning the screen into a grid of "buckets", and check only those buckets that are contained by the rectangle. Whenever a point moves from one bucket to another, the grid will have to update the affected buckets.
If memory is a constraint, you may want to look at using a modified Quad Tree which is limited by tree depth instead of bucket size, if the grid approach is not efficient enough.
If you have a lot of points moving around a lot every frame, I think you may be better off with the grid approach or just with the naive O(n) approach. Experiment and choose the approach that best suits your problem.
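The bucket-grid idea above can be sketched like this (a minimal sketch; the cell size, point-id scheme, and class name are assumptions for illustration):

```python
from collections import defaultdict

CELL = 64  # bucket size in pixels; tune to your point density

class Grid:
    def __init__(self):
        self.buckets = defaultdict(set)   # (cx, cy) -> set of point ids
        self.pos = {}                     # point id -> (x, y)

    @staticmethod
    def _cell(x, y):
        return (int(x // CELL), int(y // CELL))

    def move(self, pid, x, y):
        # Only the two affected buckets are touched when a point
        # crosses a cell border; moves inside a cell are free.
        old = self.pos.get(pid)
        new_cell = self._cell(x, y)
        if old is not None:
            old_cell = self._cell(*old)
            if old_cell != new_cell:
                self.buckets[old_cell].discard(pid)
                self.buckets[new_cell].add(pid)
        else:
            self.buckets[new_cell].add(pid)
        self.pos[pid] = (x, y)

    def query(self, left, top, right, bottom):
        # Visit only the buckets the rectangle overlaps, then run the
        # exact point-in-rect test on their contents.
        hits = []
        for cx in range(int(left // CELL), int(right // CELL) + 1):
            for cy in range(int(top // CELL), int(bottom // CELL) + 1):
                for pid in self.buckets[(cx, cy)]:
                    x, y = self.pos[pid]
                    if left <= x <= right and top <= y <= bottom:
                        hits.append(pid)
        return hits
```

Unlike a range tree, moving a point costs at most two set operations, which is why the grid suits points that move every frame.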

Getting position & rotation in parent space

I have a few bones.
Bone001:
Bone002:
They are aligned in the same direction. Bone001 has a rotation (in both World and Parent space). Bone002 has the same rotation as Bone001 in World space, and its rotation in Parent space (Bone001) is 0.
I want to get the position and rotation of Bone002 in Parent space (which should be 0).
I have tried (according to official documentation):
-- each of these returns the same World space pos (as $Bone002.transform.pos)
(in coordsys parent $Bone002.transform.pos)
(in coordsys local $Bone002.transform.pos)
(in coordsys $Bone001 $Bone002.transform.pos)
$Bone002.transform.pos *= inverse $Bone002.transform
But each and every of them returns the Bone002 World space position (and not the Parent space one). Same for rotation.
You were close with the last one: to get a transformation in another object's space, multiply by the inverse of that object's transform. Here, that would be:
obj.transform.pos * inverse obj.parent.transform
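To see why multiplying by the inverse works, here is the same idea in plain Python with 2D homogeneous matrices in MAXScript's row-vector convention (the helper names are hypothetical; this is an illustration of the matrix math, not MAXScript):

```python
import math

def mat2d(angle, tx, ty):
    """Row-vector 2D transform: rotation by `angle` radians followed by
    translation (tx, ty). A point transforms as row-vector p' = p * M."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, s, 0.0], [-s, c, 0.0], [tx, ty, 1.0]]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inverse(m):
    """Inverse of a rigid (rotation + translation) row-vector matrix:
    transpose the 2x2 rotation block, rotate the negated translation."""
    r00, r01 = m[0][0], m[1][0]
    r10, r11 = m[0][1], m[1][1]
    t0, t1 = -m[2][0], -m[2][1]
    return [[r00, r01, 0.0],
            [r10, r11, 0.0],
            [t0 * r00 + t1 * r10, t0 * r01 + t1 * r11, 1.0]]

# child_world = child_local * parent_world (row-vector convention),
# so child_local = child_world * inverse(parent_world).
parent_world = mat2d(math.radians(30), 5.0, 2.0)
child_local_true = mat2d(0.0, 4.0, 0.0)   # 4 units along the parent's X
child_world = mul(child_local_true, parent_world)
child_local = mul(child_world, inverse(parent_world))
```

Right-multiplying the child's world transform by the inverse of the parent's world transform cancels the parent and leaves the child's local (parent-space) transform, which is exactly what the MAXScript one-liner above does.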

Photoshop smart object: get rotation angle via AppleScript?

I have tried and failed to find AppleScript code that returns a smart object's current rotation angle in Photoshop. Does anyone know where that property is listed? I'm beginning to think this feature isn't currently exposed to AppleScript.
In Photoshop, objects like a selection have no angle value, because it would mean nothing: if your selection is made of multiple segments forming a complex shape, there is no mathematical way to define an angle for that shape!
However, you can work with the boundary rectangle (which encloses that shape). You can rotate this whole boundary (i.e. the selection), and then you will get a new boundary (a new rectangle that the rotated shape fits in).
A boundary rectangle is made of a list of four values:
top left corner horizontal position (X1)
top left corner vertical position (Y1)
bottom right corner horizontal position (X2)
bottom right corner vertical position (Y2)
Positions are real numbers measured from the border of the canvas (not the border of the layer! so you may have negative values). The units depend on the document's unit of measure.
Once that's clear (I hope!), if you compare the initial boundary with the new boundary mathematically, you can calculate the rotation angle
(right-triangle trigonometry).
If you assume that the initial rectangle's borders were vertical and horizontal:
cos(Theta) = (X2 - X1) / (X'2 - X'1)
Theta = the angle you are looking for
X1, X2 are the positions of the boundary corners before rotation, and X'1, X'2 are the positions of the same corners after rotation.
Please note that this method is OK for selections (any shape) or layers.
It should also be OK for the full canvas, but I have never tested it on a canvas.
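A more robust variant of the same idea, sketched in Python rather than AppleScript: if you can read the position of one specific corner before and after the transform, the rotation angle is simply the change in that corner's bearing from the centre, with no bounding-box trigonometry needed (function and argument names are illustrative):

```python
import math

def rotation_angle(center, corner_before, corner_after):
    """Rotation angle in degrees of a shape about `center`, recovered
    from one tracked corner's position before and after the rotation."""
    cx, cy = center
    # Bearing of the corner from the centre, before and after.
    a0 = math.atan2(corner_before[1] - cy, corner_before[0] - cx)
    a1 = math.atan2(corner_after[1] - cy, corner_after[0] - cx)
    # Difference of bearings, normalised to [0, 360).
    return math.degrees(a1 - a0) % 360.0
```

Unlike the cosine-of-widths ratio, which is only exact when the shape is essentially a thin horizontal strip, tracking a single corner works for any shape and any angle.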

Efficient approximation of rotation

I am trying to write an algorithm that rotates one square around its centre in 2D until it matches, or is "close enough" to, a second, rotated square that started in the same position, is the same size, and shares the same centre. That part is fairly easy.
However, the corners of the squares need to match up: to get a match, the top-right corner of the square being rotated must be close enough to what was originally the top-right corner of the target square.
I am trying to make this as efficient as possible, so if the closeness of the two squares (based on the above criteria) gets worse, I know I need to rotate back in the opposite direction.
I have already written the methods to rotate the squares and to test how close they are to one another.
My main problem is how I should change the amount to rotate on each iteration based on how close I get.
E.g. If the current measurement is closer than the previous, halve the angle and go in the same direction otherwise double the angle and rotate in the opposite direction?
However, I suspect this is quite a poor solution in terms of efficiency.
Any ideas would be much appreciated.
How about this scheme:
Rotate by 0, 90, 180 and 270 degrees (note that there are more efficient algorithms for these special rotations than for generic rotation); compare each of them to find the quadrant you need to be searching. In other words, try to find the two axes with the highest match.
Then do a binary search. For example, when you have determined that your rotated square is in the 90-180 quadrant, partition the search area into two octants: 90-135 and 135-180. Rotate by 90+45/2 and 180-45/2 and compare. If the 90+45/2 rotation has a higher match value than the 180-45/2 one, continue searching in the 90-135 octant; otherwise continue in the 135-180 octant. Lather, rinse, repeat.
Each time in the recursion, you do this:
partition the search space into two halves (if the search space is from A to B, the split point is the midpoint (A + B) / 2)
check the left half: rotate by A + (B - A) / 4. Compare.
check the right half: rotate by B - (B - A) / 4. Compare.
Narrow the search space to the left or the right half, based on which one has the higher match value.
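The recursion above is essentially a ternary search over a unimodal match score. A minimal sketch, assuming a caller-supplied match(angle) similarity function (hypothetical) with a single peak on the interval:

```python
def best_angle(match, lo=0.0, hi=90.0, tol=0.01):
    """Ternary search for the angle in [lo, hi] that maximizes
    match(angle), assuming match is unimodal on the interval.
    Each iteration discards a third of the search space."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if match(m1) < match(m2):
            lo = m1   # the maximum cannot lie in [lo, m1]
        else:
            hi = m2   # the maximum cannot lie in [m2, hi]
    return (lo + hi) / 2.0
```

The number of match() evaluations is O(log(range / tol)), so after the four coarse quadrant comparisons you converge to a hundredth of a degree in a few dozen rotations.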
Another scheme I can think of is, instead of trying to rotate and search, you try to locate the "corners" of the rotated image.
If your image does not contain any transparency, then there are four points located sqrt(width^2 + height^2) / 2 away from the centre whose colors are exactly the same as the corners of the unrotated image. This will limit the number of rotations you need to search through.
...also, to build upon the other suggestions here, remember that for a square you rotate around its centre, you only need to calculate the rotation of a single corner. You can infer the other three corners by adding or subtracting the same rotated offset (each 90-degree step just swaps and negates its components). This should speed up your calculations a bit (assuming, though I doubt it, that this is a bottleneck here).
