MKMapView is misaligned to its region property

I want to display a certain map region in an MKMapView, but when I put a rectangular overlay on the map with the very same parameters, it is displayed vertically misaligned. It looks good enough close to the equator, but the misalignment increases with latitude and with the span.
This is for a Mac app, but it should be the same on iOS.
This is my relevant code:
MKCoordinateRegion mapRegion = MKCoordinateRegionMake(CLLocationCoordinate2DMake(latCenter, lonCenter), MKCoordinateSpanMake(mapWidthY, mapWidthX));
self.radarMap.region = mapRegion;
CLLocationCoordinate2D coordinates[4];
coordinates[0] = CLLocationCoordinate2DMake(latCenter+mapWidthY/2, lonCenter+mapWidthX/2);
coordinates[1] = CLLocationCoordinate2DMake(latCenter+mapWidthY/2, lonCenter-mapWidthX/2);
coordinates[2] = CLLocationCoordinate2DMake(latCenter-mapWidthY/2, lonCenter-mapWidthX/2);
coordinates[3] = CLLocationCoordinate2DMake(latCenter-mapWidthY/2, lonCenter+mapWidthX/2);
self.boundaryOverlay = [MKPolygon polygonWithCoordinates:coordinates count:4];
[self.radarMap addOverlay:self.boundaryOverlay];
It shows this (notice the blue rect overlay is shifted up, so the upper part of the region is not displayed):
Instead of something like this (I'm aware that it is displayed in aspect fill):

When you set the region property of an MKMapView object, MapKit adjusts the value of the region property so that it matches the actual region that's visible. That means that the actual value of region isn't going to be exactly what you assigned to it. So instead of creating the polygon using the region that you assigned to the map, you should get the updated value of region from the MKMapView object and use that to create the polygon.
MKCoordinateRegion mapRegion = MKCoordinateRegionMake(CLLocationCoordinate2DMake(latCenter, lonCenter), MKCoordinateSpanMake(mapWidthY, mapWidthX));
self.radarMap.region = mapRegion;
CLLocationCoordinate2D coordinates[4];
// Get the actual region that MapKit is using
MKCoordinateRegion actualMapRegion = self.radarMap.region;
CLLocationDegrees actualLatCenter = actualMapRegion.center.latitude;
CLLocationDegrees actualLonCenter = actualMapRegion.center.longitude;
CLLocationDegrees actualLatSpan = actualMapRegion.span.latitudeDelta;
CLLocationDegrees actualLonSpan = actualMapRegion.span.longitudeDelta;
// And use that to create the polygon
coordinates[0] = CLLocationCoordinate2DMake(actualLatCenter + actualLatSpan/2, actualLonCenter + actualLonSpan/2);
coordinates[1] = CLLocationCoordinate2DMake(actualLatCenter + actualLatSpan/2, actualLonCenter - actualLonSpan/2);
coordinates[2] = CLLocationCoordinate2DMake(actualLatCenter - actualLatSpan/2, actualLonCenter - actualLonSpan/2);
coordinates[3] = CLLocationCoordinate2DMake(actualLatCenter - actualLatSpan/2, actualLonCenter + actualLonSpan/2);
self.boundaryOverlay = [MKPolygon polygonWithCoordinates:coordinates count:4];
[self.radarMap addOverlay:self.boundaryOverlay];
I was curious about the increasing misalignment you were seeing as you moved north. It occurred to me that you were probably using a fixed ratio between mapWidthX and mapWidthY. MapKit uses the Mercator projection, which preserves angles but not distances: the map gets stretched in the north-south direction, with more stretching the farther you get from the equator.
If you create your region using a ratio that's correct at the equator, it will be incorrect as you move toward the poles. MKMapView will take the region you give it and display something close to it, but the farther you get from the equator, the bigger the adjustment it has to make, and the bigger the difference between the region you give it and the actual region it uses.
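If you want a region whose displayed proportions stay roughly constant, one option is to widen the longitude span by the Mercator stretch factor at the center latitude. A minimal sketch, assuming you want a fixed on-screen width:height ratio (the helper name and the aspect parameter are illustrative):
#import <MapKit/MapKit.h>
#include <math.h>
// Hedged sketch: build a region whose displayed width:height ratio is roughly
// `aspect` at the given latitude. On a Mercator map a degree of latitude is
// stretched by 1/cos(latitude), so the longitude span must grow by the same
// factor to keep the on-screen proportions constant away from the equator.
static MKCoordinateRegion regionWithAspect(CLLocationCoordinate2D center,
                                           CLLocationDegrees latSpan,
                                           CGFloat aspect)
{
    CLLocationDegrees lonSpan = latSpan * aspect / cos(center.latitude * M_PI / 180.0);
    return MKCoordinateRegionMake(center, MKCoordinateSpanMake(latSpan, lonSpan));
}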

Related

Opposite of glScissor in Cocos2D?

I found a class called ClippingNode that I can use on sprites to only display a specified rectangular area: https://github.com/njt1982/ClippingNode
One problem is that I need to do exactly the opposite, meaning I want the inverse of that. I want everything outside of the specified rectangle to be displayed, and everything inside to be taken out.
In my test I'm using the position of a sprite, which updates every frame, so a new clipping rect will need to be defined each frame.
CGRect menuBoundaryRect = CGRectMake(lightPuffClass.sprite.position.x, lightPuffClass.sprite.position.y, 100, 100);
ClippingNode *clipNode = [ClippingNode clippingNodeWithRect:menuBoundaryRect];
[clipNode addChild:darkMapSprite];
[self addChild:clipNode z:100];
I noticed the ClippingNode class allocs objects internally, but I'm not using ARC (the project is too big and complex to update to ARC), so I'm wondering what I'll need to release, and where.
I've tried a couple of masking classes, but whatever I mask fits over the entire sprite (my sprite covers the entire screen). Additionally, the mask will need to move, so I thought glScissor would be a good alternative if I can get it to do the inverse.
You don't need anything beyond what comes out of the box.
You just define a CCClippingNode with a stencil, set it to be inverted, and you're done. I added a carrot sprite to show how to add sprites to the clipping node so that they are taken into account.
@implementation ClippingTestScene
{
CCClippingNode *_clip;
}
And the implementation part
_clip = [[CCClippingNode alloc] initWithStencil:[CCSprite spriteWithImageNamed:@"white_board.png"]];
_clip.alphaThreshold = 1.0f;
_clip.inverted = YES;
_clip.position = ccp(self.boundingBox.size.width/2 , self.boundingBox.size.height/2);
[self addChild:_clip];
_img = [CCSprite spriteWithImageNamed:@"carrot.png"];
_img.position = ccp(-10.0f, 0.0f);
[_clip addChild:_img];
You have to set an extra flag for this to work, though; Cocos will spit out what you need to do in the console.
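For reference, the flag in question enables the stencil buffer, which CCClippingNode needs. In cocos2d 3.x that is typically requested in the app delegate; a hedged sketch (CCSetupDepthFormat and the exact GLenum are assumptions that depend on your cocos2d version, so check the console message for the precise wording your version expects):
// Hedged sketch for a cocos2d-iphone 3.x app delegate: request a combined
// depth/stencil buffer so CCClippingNode's stencil test has something to write to.
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    [self setupCocos2dWithOptions:@{
        CCSetupDepthFormat : @(GL_DEPTH24_STENCIL8_OES), // assumption: iOS OES variant
    }];
    return YES;
}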
I once used CCScissorNode.m from https://codeload.github.com/NoodlFroot/ClippingNode/zip/master
The implementation (not the inverse you are looking for) was something like:
CGRect innerClippedLayer = CGRectMake(SCREENWIDTH/14, SCREENHEIGHT/6, 275, 325);
CCScissorNode *tmpLayer = [CCScissorNode scissorNodeWithRect:innerClippedLayer];
[self addChild:tmpLayer];
So for you it may work like this: if you know the rectangular area you don't want to show (i.e. the area to invert) and you know the screen area, you can subtract the rectangle from the screen area. That would give you the inverse area. I haven't done this myself; maybe tomorrow I can post some code.
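To make the subtraction idea concrete: everything inside the screen but outside a single rectangle can be covered by at most four rectangles, one band per side. A hedged sketch of that decomposition (the helper name is illustrative, and it assumes the hole lies entirely within the screen rect); you could then scissor or clip to each band in turn and draw the content once per band:
#import <CoreGraphics/CoreGraphics.h>
// Hedged sketch: cover everything inside `screen` but outside `hole` with up to
// four rects (top band, bottom band, left band, right band). Bands may come out
// with zero width/height when the hole touches an edge; skip those when drawing.
static int inverseRects(CGRect screen, CGRect hole, CGRect out[4])
{
    int n = 0;
    // band above the hole, full screen width
    out[n++] = CGRectMake(CGRectGetMinX(screen), CGRectGetMaxY(hole),
                          screen.size.width, CGRectGetMaxY(screen) - CGRectGetMaxY(hole));
    // band below the hole, full screen width
    out[n++] = CGRectMake(CGRectGetMinX(screen), CGRectGetMinY(screen),
                          screen.size.width, CGRectGetMinY(hole) - CGRectGetMinY(screen));
    // band left of the hole, hole height
    out[n++] = CGRectMake(CGRectGetMinX(screen), CGRectGetMinY(hole),
                          CGRectGetMinX(hole) - CGRectGetMinX(screen), hole.size.height);
    // band right of the hole, hole height
    out[n++] = CGRectMake(CGRectGetMaxX(hole), CGRectGetMinY(hole),
                          CGRectGetMaxX(screen) - CGRectGetMaxX(hole), hole.size.height);
    return n;
}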

pygame rotation around center point

I've read several other posts about rotating an image around the center point, but I've yet to figure it out. I even copy-pasted one of the solutions posted on another SO question and it didn't work.
This is my code
def rotate(self, angle):
    self.old_center = self.surface.get_rect().center
    self.surface = pygame.transform.rotate(self.surface, angle)
    self.surface.get_rect(center = self.old_center)
It's inside a class which contains the surface.
When I call this method, the image rotates but also translates and gets distorted.
You are not assigning the new rect; you should do something like:
self.rect = self.surface.get_rect(center = self.old_center)
And you should always keep the original surface and rotate from the original; that way distortion doesn't accumulate when rotating multiple times.
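Putting both fixes together, a minimal sketch (it assumes the class keeps the surface it was constructed with as self.original_surface; the names are illustrative):
import pygame

class Rotatable:
    def __init__(self, image):
        # keep the pristine surface and always rotate from it
        self.original_surface = image
        self.surface = image
        self.rect = self.surface.get_rect()

    def rotate(self, angle):
        old_center = self.rect.center
        # rotating from the original avoids accumulating distortion
        self.surface = pygame.transform.rotate(self.original_surface, angle)
        # get_rect() returns a new rect, so re-center it and assign it
        self.rect = self.surface.get_rect(center=old_center)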
Update
If you don't want to keep track of the rect object, you can recompute it every time you blit, as long as you keep the center coordinates.
Example:
rect = object.surface.get_rect()
rect.center = object.center
display.blit(object.surface, rect.topleft)

Core-Plot Mac: show label when mouseover point in graph

I'm building a Mac OS X application with a nice graph in it made with Core Plot. It's a line chart (scatter plot) with multiple points on it, each visualized with a circular plot symbol.
My goal is to show a label with the value of a point when the user mouses over that point in the chart.
I have already added an NSTrackingArea to the graph and this works, but I'm lost on how to translate a plot point/plot symbol into view coordinates, so that I know when the mouse is over a point and can show a label.
Does somebody have an idea?
Thank you all.
My solution would be something along those lines:
(I assume that the graph does not display any labels. Labels are being displayed only when the mouse is over a point on the graph.)
In the place where you handle the mouse logic, you would do:
NSDecimalNumber *tickLocation = [NSDecimalNumber numberWithDouble:"The relevant axis value of the object you have the mouse over"];
NSString *labelText = @"The text of the label";
NSMutableArray *customLabels = [NSMutableArray arrayWithCapacity:1];
CPTAxisLabel *label = [[CPTAxisLabel alloc] initWithText: labelText textStyle:axisSet."Whatever axis you want -X/Y".labelTextStyle];
label.tickLocation = [tickLocation decimalValue];
label.offset = axisSet."Whatever axis you want -X/Y".labelOffset + axisSet."Whatever axis you want -X/Y".majorTickLength;
label.rotation = M_PI/4;
[customLabels addObject:label];
axisSet."Whatever axis you want -X/Y".axisLabels = [NSSet setWithArray:customLabels];
Hope this helps.
You can use the -indexOfVisiblePointClosestToPlotAreaPoint: method to find out which plot symbol is closest to a certain pixel. This method returns the datasource index of the point; you can look at the original data provided by your datasource to get the actual values. The plotSymbolMarginForHitDetection property controls how close the point has to be to the given point before it registers as a hit.
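A hedged sketch of how that might look in a mouse handler (self.hostingView, self.graph, and self.scatterPlot are assumed names, and on the Mac you may need to adjust for flipped view coordinates):
- (void)mouseMoved:(NSEvent *)event
{
    // window coordinates -> hosting view coordinates
    NSPoint viewPoint = [self.hostingView convertPoint:event.locationInWindow fromView:nil];
    // hosting view coordinates -> plot area coordinates
    CGPoint plotAreaPoint = [self.graph convertPoint:NSPointToCGPoint(viewPoint)
                                             toLayer:self.graph.plotAreaFrame.plotArea];
    NSUInteger index = [self.scatterPlot indexOfVisiblePointClosestToPlotAreaPoint:plotAreaPoint];
    if (index != NSNotFound) {
        // look up the value at `index` in your datasource and show/position the label
    }
}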

Convert a given point from the window’s base coordinate system to the screen coordinate system

I am trying to figure out how to convert a given point from the window's base coordinate system to the screen coordinate system. I mean something like - (NSPoint)convertBaseToScreen:(NSPoint)point, but I want it from Quartz/Carbon.
I have a CGContextRef and its bounds, but the bounds are relative to the window the CGContextRef belongs to. For example, if the window is at (100, 100, 50, 50) with respect to the screen, the context rect for that window would be (0, 0, 50, 50); i.e. I am at location (0, 0) in the context but actually at (100, 100) on the screen.
Any suggestions are appreciated.
Thank you.
The window maintains its own position in global screen space and the compositor knows how to put that window's image at the correct location in screen space. The context itself, however doesn't have a location.
Quartz Compositor knows where the window is positioned on the screen, but Quartz 2D doesn't know anything more than how big the area it is supposed to draw in is. It has no idea where Quartz Compositor is going to put the drawing once it is done.
Similarly, when putting together the contents of a window, the frameworks provide the view system. The view system lets the OS create contexts for drawing individual parts of a window and manages where the results of that drawing are placed, usually by manipulating the context's transform or by creating temporary offscreen contexts. The context itself, however, doesn't know where the final graphic is going to be rendered.
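That said, if you can get at the WindowRef the context is drawing into, the conversion is just an offset by the window's content region origin, which Carbon reports in global (screen) coordinates. A hedged sketch:
#include <Carbon/Carbon.h>
// Hedged sketch: offset a window-local point into global (screen) coordinates
// using the window's content region bounds, which GetWindowBounds returns in
// global space.
static HIPoint windowPointToScreen( WindowRef window, HIPoint p )
{
    Rect content; // content region in global (screen) coordinates
    GetWindowBounds( window, kWindowContentRgn, &content );
    p.x += content.left;
    p.y += content.top;
    return p;
}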
I'm not sure you can do it with just a CGContextRef; you need a window or view reference, or something similar, to do the conversion.
The code I use does the opposite, converting mouse coordinates from global (screen) to view-local, and it goes something like this:
Point mouseLoc; // the point to convert, in global (screen) coordinates
HIPoint where; // final view-local coordinates
PixMapHandle portPixMap;
// the port pix map is needed to get the correct offset; otherwise the y coordinate
// is off by at least the menu bar height
portPixMap = GetPortPixMap( GetWindowPort( GetControlOwner( view ) ) );
QDGlobalToLocalPoint( GetWindowPort( GetControlOwner( view ) ), &mouseLoc );
where.x = mouseLoc.h - (**portPixMap).bounds.left;
where.y = mouseLoc.v - (**portPixMap).bounds.top;
HIViewConvertPoint( &where, NULL, view );
so I guess the opposite is needed for you (I haven't tested whether it actually works):
void convert_point_to_screen( HIViewRef view, HIPoint *where )
{
    Point point; // used for the QuickDraw calls
    PixMapHandle portPixMap = GetPortPixMap( GetWindowPort( GetControlOwner( view ) ) );
    HIViewConvertPoint( where, view, NULL ); // view-local to window-local coordinates
    point.h = where->x + (**portPixMap).bounds.left;
    point.v = where->y + (**portPixMap).bounds.top;
    QDLocalToGlobalPoint( GetWindowPort( GetControlOwner( view ) ), &point );
    // convert the Point back to an HIPoint
    where->x = point.h;
    where->y = point.v;
}

Time Machine style Navigation

I've been doing some programming for iPhone lately and now I'm venturing into the iPad domain. The concept I want to realize relies on navigation similar to Time Machine in OS X. In short, I have a number of views that can be panned and zoomed, like any normal view. However, the views are stacked upon each other using a third dimension (in this case, depth). The user can then navigate to any view by, in this case, picking a letter, whereupon the app flies through the views until it reaches the view for the selected letter.
My question is: can somebody give me the complete final code for how to do this? Just kidding. :) What I need is a push in the right direction, since I'm unsure how to even start doing this, and whether it is at all possible using the available frameworks. Any tips are appreciated.
Thanks!
Core Animation (or more specifically, the UIView animation model built on top of it) is your friend. You can create a Time Machine-like interface by positioning your views along a vertical line within their parent view (using their center properties), scaling the ones farther up that line slightly smaller than the ones below ("in front of") them (using their transform properties, with the CGAffineTransformMakeScale function), and setting their layers' z-index (get the layer through the view's layer property, then set its zPosition) so that the ones farther up the line appear behind the others. Here's some sample code.
// animate an array of views into a stack at an offset position (0 has the first view in the stack at the front; higher values move "into" the stack)
// took the shortcut here of not setting the views' layers' z-indices; this will work if the backmost views are added first, but otherwise you'll need to set the zPosition values before doing this
int offset = 0;
[UIView animateWithDuration:0.3 animations:^{
CGFloat maxScale = 0.8; // frontmost visible view will be at 80% scale
CGFloat minScale = 0.2; // farthest-back view will be at 20% scale
CGFloat centerX = 160; // horizontal center
CGFloat frontCenterY = 280; // vertical center of frontmost visible view
CGFloat backCenterY = 80; // vertical center of farthest-back view
for(int i = 0; i < [viewStack count]; i++)
{
float distance = (float)(i - offset) / [viewStack count];
UIView *v = [viewStack objectAtIndex:i];
v.transform = CGAffineTransformMakeScale(maxScale + (minScale - maxScale) * distance, maxScale + (minScale - maxScale) * distance);
v.alpha = (i - offset >= 0) ? (1 - distance) : 0; // views that have flown past the viewer get no opacity; visible views fade as their distance increases
v.center = CGPointMake(centerX, frontCenterY + (backCenterY - frontCenterY) * distance);
}
}];
And here's what it looks like, with a couple of randomly-colored views:
Do you mean something like this on the right?
If yes, it should be possible. You would have to arrange the views like in the image and animate them going forward and backward. As far as I know, there aren't any frameworks for this.
It's called Cover Flow and is also used in iTunes to view artwork/albums. Apple appears to have bought the technology from a third party and also to have patented it. However, if you google for ios cover flow you will get plenty of hits and code to point you in the right direction.
I have not looked, but I would think it might be in the iOS library; I do not know for sure.
