Mouse event location appears to be incorrect - Cocoa

I've got a simple test app with a custom view (set up in Interface Builder) with its origin at (20, 20). When I get a mouse event at the lower-left-most point in the view, the event's location is reported as (20, 21) and the converted point as (0, 1). I'm using Pixie to make sure I'm clicking right at the lower-left corner. (If I move one pixel down, I get nothing, indicating I'm outside the view.)
Here's the mouseDown code:
- (void)mouseDown:(NSEvent *)e
{
    NSPoint pt = [e locationInWindow];
    NSLog(@"Location in Window: %.0f, %.0f", pt.x, pt.y);
    pt = [self convertPoint:pt fromView:nil];
    NSLog(@"Converted Point: %.0f, %.0f", pt.x, pt.y);
}
Can anyone explain why the y position appears to be off by one?

This is correct behavior. From the Cocoa Drawing Guide:
Important: Cocoa event objects return y coordinate values that are 1-based instead of 0-based. Thus, a mouse click on the bottom left corner of a window or view would yield the point (0, 1) in Cocoa and not (0, 0). Only y-coordinates are 1-based.

A fencepost error? In your code, in the library, or in your use of the library? Or a rounding error, since you are printing floating-point data as integral pixels?


SKSpriteNode not showing movement when told to move?
As explained shortly, I believe that I need to convert SKScene coordinates to SKView coordinates. So my question reduces to "How do I do that?"
Specifics:
I have a .sks file from which I manually extract size and position data for an SKSpriteNode that is supposed to move; its movement is constrained only by the surrounding wall, off which it bounces when it hits it.
This SKSpriteNode's changing position within this wall is based on its anchorPoint = (0.5, 0.5).
Every time the object moves, I call this, for example:
func drawBall() {
    newPosition = CGPoint(x: ballPosX, y: ballPosY)
    moveTO = SKAction.move(to: newPosition, duration: TimeInterval(0))
    myBall!.run(moveTO)
}
The fact that I do not see physical movement indicates that I may have a coordinate problem.
Specifically, the fact that the position of the SKSpriteNode is based on its anchorPoint = (0.5, 0.5) shows me that I am dealing with SKScene coordinates and I need to convert these coordinates to SKView coordinates.
In short, how do I do that? Or, if I have another error, how do I correct it?

Cocoa NSPoint to Quartz NSPoint - Flip Y coordinate

In macOS programming, we know that:
Quartz uses a coordinate space where the origin (0, 0) is at the top-left of the primary display, and increasing y goes down.
Cocoa uses a coordinate space where the origin (0, 0) is at the bottom-left of the primary display, and increasing y goes up.
Now I am using a Quartz API, CGImageCreateWithImageInRect, to crop an image; it takes a rectangle as a parameter. The rect's Y origin comes from Cocoa's mouseDown events.
Thus I get crops at inverted locations...
I tried this code to flip my Y co-ordinate in my cropRect
//Get the point in the mouseDragged event
NSPoint currentPoint = [self.view convertPoint:[theEvent locationInWindow] fromView:nil];
CGRect nsRect = CGRectMake(currentPoint.x, currentPoint.y,
                           circleSizeW, circleSizeH);
//Now flip the Y, please!
CGFloat flippedY = self.imageView.frame.size.height - NSMaxY(nsRect);
CGRect cropRect = CGRectMake(currentPoint.x, flippedY, circleSizeW, circleSizeH);
But for areas near the top, I get wrong flippedY coordinates. If I click near the top edge of the view, I get flippedY = 510 to 515, when at the top edge it should be between 0 and 10 :-|
Can someone point me to the correct and reliable way to flip the Y coordinate in such circumstances? Thank you!
Here is a sample project on GitHub highlighting the issue:
https://github.com/kamleshgk/SampleMacOSApp
As Charles mentioned, the Core Graphics API you are using requires coordinates relative to the image (not the screen). The important thing is to convert the event location from window coordinates to the view which most closely corresponds to the image's location and then flip it relative to that same view's bounds (not frame). So:
NSView *relevantView = /* only you know which view */;
NSPoint currentPoint = [relevantView convertPoint:[theEvent locationInWindow] fromView:nil];
// currentPoint is in Cocoa's y-up coordinate system, relative to relevantView, which hopefully corresponds to your image's location
currentPoint.y = NSMaxY(relevantView.bounds) - currentPoint.y;
// currentPoint is now flipped to be in Quartz's y-down coordinate system, still relative to relevantView/your image
The rect you pass to CGImageCreateWithImageInRect should be in coordinates relative to the input image's size, not screen coordinates. Assuming the size of the input image matches the size of the view to which you've converted your point, you should be able to achieve this by subtracting the rect's corner from the image's height, rather than the screen height.
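The flip itself is just a subtraction against the container's height. Here is a minimal Python sketch of the rect-flip convention used in the question (subtract the rect's top edge from the container height); the function name and the sample numbers are made up for illustration:

```python
def flip_y(y, rect_height, container_height):
    """Convert a bottom-left-origin (Cocoa-style) rect y into a
    top-left-origin (Quartz-style) rect y."""
    # In Cocoa, y measures up from the bottom, so the rect's top edge
    # sits at y + rect_height. In Quartz, we measure down from the top.
    return container_height - (y + rect_height)

# A 100-point-tall rect whose bottom edge sits at y=0 in a 512-point
# container ends up with its top edge at 412 in top-left coordinates.
print(flip_y(0, 100, 512))    # 412
# A rect at the very top (bottom edge at 412) maps to 0.
print(flip_y(412, 100, 512))  # 0
```

Note that this must use the bounds height of the view the point was converted into, per the answer above, not the screen height.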

c++ SDL how to create a vector from point a to b

I've made an object called Player and an object called Zombie. I would like to add a shooting function for Player, with the bullet going from the Player's position to the point where I am holding my mouse. How can this be achieved?
Let's eliminate the other directions first and just imagine that the direction is from left to right.
You first need to get the mouse's position, which is possible by using
SDL_GetMouseState(int* placeX, int* placeY)
This writes the mouse's x location to placeX and the mouse's y location to placeY.
Now you also need to get the position of the player; this could be done by getting the player's texture's rect while in the loop.
So if the character is at (x, y) = (0, 10) with (w, h) = (20, 40), and you have the bullet source right at the middle-right of the character, that puts it at (20, 20) within the character, so the actual bullet source is at (x, y) = (20, 30) (basically rect.x + bulletSource.x and rect.y + bulletSource.y).
Now all you have to do is something like
if (bullet.x < placeX) // this makes it look like the bullet is moving to the right
{
    bullet.x++; // this moves the bullet closer to the right
    // redraw the bullet -- you have to redraw to see the changes
}
Do the same thing for y; it doesn't really have to be an if statement.
The same logic applies when you have multiple bullet directions.
You just have to think of the bullet's source as the center of the x and y axes and check whether the mouse click is within a certain quadrant (diagonal bullet) or near the line of an axis (straight bullet).
Once you know which quadrant or line the mouse is at, you just change the texture to be used (a diagonal bullet facing north-east, or just a plain ol' simple bullet facing north).
Thanks for the help, but I solved it by using atan2 and calculating the angle. Once I had the angle, I just added speed * cos(angle) to the x variable and speed * sin(angle) to the y variable.
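That atan2 approach can be sketched in a few lines; this is a minimal Python illustration of the math only (the names player, mouse, and speed are made up, and drawing is left out):

```python
import math

def bullet_velocity(player, mouse, speed):
    """Per-frame velocity for a bullet travelling from player toward mouse.

    player and mouse are (x, y) tuples; speed is in pixels per frame.
    """
    angle = math.atan2(mouse[1] - player[1], mouse[0] - player[0])
    # cos gives the x fraction of the speed, sin the y fraction.
    return (speed * math.cos(angle), speed * math.sin(angle))

# A bullet fired from (0, 0) toward (100, 0) moves straight along +x.
vx, vy = bullet_velocity((0, 0), (100, 0), 5.0)
print(round(vx, 6), round(vy, 6))  # 5.0 0.0
```

Each frame you would then add (vx, vy) to the bullet's position, which is exactly what the answer above describes.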

Finding World Space Coordinates of a Unity UI Element

So according to the Unity documentation RectTransform.anchoredPosition will return the screen coordinates of a UI element if the anchors are touching at the pivot point of the RectTransform. However, if they are separated (in my case positioned at the corners of the rect) they will give you the position of the anchors relative to the pivot point. This is wonderful unless you want to keep appropriate dimensions of a UI object through multiple resolutions and position a different object based on that position at the same time.
Let's break this down. I have object1 and object2. object1 is positioned at (322.5, -600), and when the anchor points meet at the center (pivot) of the object, anchoredPosition returns just that and object2 is positioned just fine. On the other hand, once I have placed the anchors at the 4 corners of object1, anchoredPosition returns (45.6, -21). That's just no good. I've even tried using Transform.position and then Camera.WorldToScreenPoint(), but that does just about as much to get me to my goal.
I was hoping that you might be able to help me find a way to get the actual screen coordinates of this object. If anyone has any insight into this subject it would be greatly appreciated.
Notes: I've already attempted to use RectTransform.rect.center and it returned (0, 0)
I've also looked into RectTransformUtility and those helper functions have done all of squat.
anchoredPosition returns "The position of the pivot of this RectTransform relative to the anchor reference point." It has nothing to do with screen coordinates or world space.
If you're looking for the screen coordinates of a UI element in Unity, you can either use rectTransform.TransformPoint or rectTransform.GetWorldCorners to get any of the Vector3s you'd need in world space. Whichever you decide to go with, you can then pass them into Camera.WorldToScreenPoint()
Here's a glimpse of how finding world-space coordinates of UI elements works, if you're stuck and need to roll your own transformations from view space to world space.
This may be beneficial if, say, you need something more than rectTransform.TransformPoint, or want to know how this works.
OK, so you want to do a transformation from normalised UI coordinates in the range [-1, 1], and de-project them back into world-space coordinates.
To do this you could use something like Camera.main.ScreenToWorldPoint or Camera.main.ViewportToWorldPoint, or even rectTransform.position if you're a slacker.
This is how to do it with just the camera's projection matrix.
/// <summary>
/// Get the world position of an anchor/normalised device coordinate in the range [-1, 1]
/// </summary>
private Vector3 GetAnchor(Vector2 ndcSpace)
{
    Vector3 worldPosition;
    Vector4 viewSpace = new Vector4(ndcSpace.x, ndcSpace.y, 1.0f, 1.0f);

    // Transform to projection coordinate.
    Vector4 projectionToWorld = (_mainCamera.projectionMatrix.inverse * viewSpace);

    // Perspective divide.
    projectionToWorld /= projectionToWorld.w;

    // Z-component is backwards in Unity.
    projectionToWorld.z = -projectionToWorld.z;

    // Transform from camera space to world space.
    worldPosition = _mainCamera.transform.position + _mainCamera.transform.TransformVector(projectionToWorld);

    return worldPosition;
}
I've found out that you can multiply your coordinate by two times the camera size and divide by the screen height.
I have a panel placed at (0, 1080) on a full-HD screen (1920 x 1080), and the camera size is 7. So the Y coordinate in world space will be 1080 * 7 * 2 / 1080 = 14 -> (0, 14).
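That rule of thumb for an orthographic camera can be sketched as follows; this is just the arithmetic from the example above in Python, with a made-up function name (it assumes, as the answer does, an orthographic camera whose visible world height is twice its size):

```python
def screen_to_world_y(screen_y, camera_size, screen_height):
    """Map a screen-space y (pixels) to world units for an orthographic
    camera: the visible world height is 2 * camera_size, so scale the
    pixel coordinate by (2 * camera_size) / screen_height."""
    return screen_y * 2 * camera_size / screen_height

# Top of a 1080-pixel-tall screen with camera size 7, as in the example:
print(screen_to_world_y(1080, 7, 1080))  # 14.0
```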
ScreenToWorldPoint converts a canvas position to a world position:
Camera.main.ScreenToWorldPoint(transform.position)

What is the Google Map zoom algorithm?

I'm working on a map zoom algorithm which changes the area (the visible part of the map) coordinates on click.
For example, at the beginning, the area has these coordinates:
(0, 0) for the upper-left corner
(100, 100) for the lower-right corner
(50, 50) for the center of the area
And when the user clicks somewhere in the area, at an (x, y) coordinate, I say that the new coordinates for the area are:
(x - (100 - 0)/3, y - (100 - 0)/3) for the upper-left corner
(x + (100 - 0)/3, y + (100 - 0)/3) for the lower-right corner
(x, y) for the center of the area
The problem is that this algorithm is not really good, because when the user clicks somewhere, the point that was under the mouse moves to the middle of the area.
So I would like an idea of the algorithm used in Google Maps to change the area coordinates, because that algorithm is pretty good: when the user clicks somewhere, the point under the mouse stays under the mouse, and the rest of the area around it is zoomed.
Does anybody have an idea of how Google does it?
Let's say you have a rectangle windowArea which holds the drawing-area coordinates (i.e. the web browser window area in pixels). For example, if you are drawing the map on the whole screen and the top-left corner has coordinates (0, 0), then that rectangle will have the values:
windowArea.top = 0;
windowArea.left = 0;
windowArea.right = maxWindowWidth;
windowArea.bottom = maxWindowHeight;
You also need to know the visible map fragment; that will be longitude and latitude ranges, for example:
mapArea.top = 51.00;    //lat
mapArea.left = 8.00;    //lng
mapArea.right = 12.00;  //lng
mapArea.bottom = 54.00; //lat
When zooming recalculate mapArea:
mapArea.left = mapClickPoint.x - (windowClickPoint.x - windowArea.left) * (newMapWidth / windowArea.width());
mapArea.top = mapClickPoint.y - (windowArea.bottom - windowClickPoint.y) * (newMapHeight / windowArea.height());
mapArea.right = mapArea.left + newMapWidth;
mapArea.bottom = mapArea.top + newMapHeight;
mapClickPoint holds the map coordinates under the mouse pointer (longitude, latitude).
windowClickPoint holds the window coordinates under the mouse pointer (pixels).
newMapHeight and newMapWidth hold the new ranges of the visible map fragment after the zoom:
newMapWidth = zoomFactor * mapArea.width();   // let's say that zoomFactor is in <1.0, maxZoomFactor>
newMapHeight = zoomFactor * mapArea.height();
When you have the new mapArea values, you need to stretch it to cover the whole windowArea; that means mapArea.top/left should be drawn at windowArea.top/left and mapArea.right/bottom should be drawn at windowArea.right/bottom.
I am not sure if Google Maps uses the same algorithm; it gives similar results and it is pretty versatile, but you need to know the window coordinates and some kind of coordinates for the visible part of the object being zoomed.
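A minimal Python sketch of the recalculation above, assuming the same fields as the pseudocode (rectangles are plain dicts here, and the top formula keeps the y-flip between window and map coordinates):

```python
def zoom_map_area(map_area, map_click, window_click, window_area, zoom_factor):
    """Recompute the visible map fragment so that the map point under the
    mouse stays under the mouse after zooming.

    map_area / window_area: dicts with 'left', 'top', 'right', 'bottom'.
    map_click: (x, y) in map coordinates; window_click: (x, y) in pixels.
    """
    win_w = window_area['right'] - window_area['left']
    win_h = window_area['bottom'] - window_area['top']
    new_w = zoom_factor * (map_area['right'] - map_area['left'])
    new_h = zoom_factor * (map_area['bottom'] - map_area['top'])
    # Offset the new top-left so map_click keeps its window position.
    left = map_click[0] - (window_click[0] - window_area['left']) * (new_w / win_w)
    top = map_click[1] - (window_area['bottom'] - window_click[1]) * (new_h / win_h)
    return {'left': left, 'top': top, 'right': left + new_w, 'bottom': top + new_h}

# Zooming in 2x about the window center of a 100x100 window showing a
# 10x10 map keeps the center point (5, 5) fixed:
window = {'left': 0, 'top': 0, 'right': 100, 'bottom': 100}
area = {'left': 0, 'top': 0, 'right': 10, 'bottom': 10}
print(zoom_map_area(area, (5, 5), (50, 50), window, 0.5))
```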
Let us state the problem in 1 dimension, with the input (left, right, clickx, ratio).
So basically, you want the ratio of the distances from the click to the left and right edges to stay the same:

left' - clickx    right' - clickx
-------------- = ---------------
left - clickx     right - clickx

and furthermore, the window is reduced, so:

right' - left'
-------------- = ratio
 right - left

Therefore, the solution is:

left'  = ratio * (left  - clickx) + clickx
right' = ratio * (right - clickx) + clickx

And you can do the same for the other dimension.
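A quick Python check of those formulas (pure arithmetic, made-up sample numbers): the interval shrinks by the given ratio, and the clicked point sits at the same fraction of the old and new intervals, so it stays under the mouse.

```python
def zoom_interval(left, right, clickx, ratio):
    """Zoom the interval [left, right] by `ratio` about the fixed point clickx."""
    return ratio * (left - clickx) + clickx, ratio * (right - clickx) + clickx

left, right, clickx, ratio = 0.0, 100.0, 25.0, 0.5
new_left, new_right = zoom_interval(left, right, clickx, ratio)
print(new_left, new_right)  # 12.5 62.5

# The new width is ratio * old width...
assert new_right - new_left == ratio * (right - left)
# ...and clickx sits at the same fraction of both intervals (0.25 here).
assert (clickx - new_left) / (new_right - new_left) == (clickx - left) / (right - left)
```

Applying the same function independently to x and y gives the 2-D zoom-about-cursor behavior the question asks about.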
