Greetings,
I'm trying to draw a circle on a map. All the separate pieces of this project work independently, but when I put them all together it breaks.
I set up my UI in viewDidLoad, retaining most of it.
I then use touch events to call my refresh-map method:
-(void)refreshMap {
    NSString *thePath = [NSString stringWithFormat:@"http://maps.google.com/staticmap?center=%f,%f&zoom=%i&size=640x640&maptype=hybrid", viewLatitude, viewLongitude, zoom];
    NSURL *url = [NSURL URLWithString:thePath];
    NSData *data = [NSData dataWithContentsOfURL:url];
    UIImage *mapImage = [[UIImage alloc] initWithData:data];
    mapImage = [self addCircle:mapImage radius:70 latCon:320 lonCon:320];
    NSLog(@"-- mapImage retainCount %i", [mapImage retainCount]);
    mapImageView.image = mapImage;
    [mapImage release];
}
Set up like this, it will load the map with a circle once, but if the map is refreshed again it crashes.
If I comment out the [mapImage release] it works repeatedly, but then it leaks memory.
The addCircle method I'm using:
-(UIImage *)addCircle:(UIImage *)img radius:(CGFloat)radius latCon:(CGFloat)lat lonCon:(CGFloat)lon {
    int w = img.size.width;
    int h = img.size.height;
    lon = h - lon;   // flip y for the bitmap context's coordinate system
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, w, h, 8, 4 * w, colorSpace, kCGImageAlphaPremultipliedFirst);
    // draw the map image, then the translucent circle on top
    CGContextDrawImage(context, CGRectMake(0, 0, w, h), img.CGImage);
    CGRect leftOval = { lat - radius / 2, lon - radius / 2, radius, radius };
    CGContextSetRGBFillColor(context, 0.0, 0.0, 1.0, 0.3);
    CGContextAddEllipseInRect(context, leftOval);
    CGContextFillPath(context);
    CGImageRef imageMasked = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    UIImage *result = [UIImage imageWithCGImage:imageMasked];
    CGImageRelease(imageMasked);   // imageWithCGImage: retains it, so release our reference
    return result;
}
Any insight/advice is greatly appreciated!
UIImage *mapImage = [[UIImage alloc] initWithData:data];
mapImage = [self addCircle:mapImage radius:70 latCon:320 lonCon:320];
That's not good. You're losing your only reference to the image you alloc'd when you reassign mapImage on the second line; the image returned by addCircle: is autoreleased, so releasing it at the end of refreshMap over-releases it and crashes, while skipping the release leaks the alloc'd image instead. The easiest way to fix this is probably to add an additional variable, so you can keep track of both images.
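A minimal sketch of that fix under manual reference counting, keeping one variable per image (rawMap and annotatedMap are illustrative names, not from the original code):

UIImage *rawMap = [[UIImage alloc] initWithData:data];
UIImage *annotatedMap = [self addCircle:rawMap radius:70 latCon:320 lonCon:320]; // autoreleased
mapImageView.image = annotatedMap; // the image view retains it for as long as it needs it
[rawMap release];                  // release only the image you alloc'd; do not release annotatedMap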
I have a UIImageView that I am moving around a circle with CGAffineTransformRotate. Works great! But when the user presses a stop button I would like to get the actual x/y position of the UIImageView. So far I am always getting the original x/y values from when the UIImageView was created.
Is there a way to get the actual position when the user stops the rotation?
I have found the solution and am sharing it in case someone runs into a similar case:
From the UIBezierPath I use the bounds information, and this gives me the position where the UIImageView stopped. Here is the code:
UIBezierPath *path = [[UIBezierPath alloc] init];
[path addArcWithCenter:CGPointMake(iMiddleX, iMiddleY) radius:flR startAngle:degreesToRadians(flDegrees-0.01) endAngle:degreesToRadians(flDegrees) clockwise:YES];
CAKeyframeAnimation *pathAnimation = [CAKeyframeAnimation animationWithKeyPath:@"position"];
pathAnimation.calculationMode = kCAAnimationPaced;
pathAnimation.fillMode = kCAFillModeForwards;
pathAnimation.removedOnCompletion = NO;
pathAnimation.repeatCount = 1;
pathAnimation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionDefault];
pathAnimation.duration = 1.0;
pathAnimation.path = path.CGPath;
NSInteger iX = path.bounds.origin.x;
NSInteger iY = path.bounds.origin.y;
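If the view itself should stay put once the animation is removed, a minimal follow-up sketch (imageView here is an assumed reference to the animated UIImageView, not from the original code):

// The arc spans only 0.01 degrees, so the path's bounding box collapses to the stop point.
CGPoint stopPoint = CGPointMake(iX + path.bounds.size.width / 2.0,
                                iY + path.bounds.size.height / 2.0);
imageView.center = stopPoint;          // update the view's model position to match
[imageView.layer removeAllAnimations]; // drop the fillMode-forwards animation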
It's been a while, as I was hospitalized for 3 months after a motorcycle accident.
So I just got to renew my Apple programming subscription :-)
I have another question that has been on my mind for quite some time.
In my iPad application I draw a triangle in the center of the screen like this:
- (void)initTriangle
{
CGRect screenBound = [[UIScreen mainScreen] bounds];
CGSize screenSize = screenBound.size;
CGFloat screenWidth = screenSize.width;
CGFloat screenHeight = screenSize.height;
// draw triangle (TRIANGLE)
CGMutablePathRef path = CGPathCreateMutable();
CGPathMoveToPoint(path,NULL, 0.5*screenWidth, 0.5*screenHeight-25);
CGPathAddLineToPoint(path, NULL, 0.5*screenWidth-25, 0.5*screenHeight+25);
CGPathAddLineToPoint(path, NULL, 0.5*screenWidth+25, 0.5*screenHeight+25);
CGPathAddLineToPoint(path, NULL, 0.5*screenWidth, 0.5*screenHeight-25);
CAShapeLayer *triangle = [CAShapeLayer layer];
[triangle setPath:path];
[triangle setFillColor:[[UIColor blackColor] CGColor]];
[[[self view] layer] addSublayer:triangle];
CGPathRelease(path);
}
And I call this from my viewDidLoad like this:
[self initTriangle];
Now I'm trying to rotate this triangle with the rotation of my iPad around the Z-axis while it is lying flat on the table. I have a function that gives me the yaw readings as a float, and I'm calling my
-(void)updateTriangleWithYaw:(float)yaw
method, but I don't know exactly what to put in there to make it rotate.
Here is what my method looks like so far:
-(void)updateTriangleWithYaw:(float)yaw
{
CGRect screenBound = [[UIScreen mainScreen] bounds];
CGSize screenSize = screenBound.size;
CGFloat screenWidth = screenSize.width;
CGFloat screenHeight = screenSize.height;
NSLog(#"YAW: %f", yaw);
Z += 2 * yaw;
Z *= 0.8;
CGFloat newR = R + 10 * yaw;
self.triangle.frame = CGRectMake(0.5*screenWidth, 0.5*screenHeight, newR, newR);
}
Any help will be greatly appreciated!
Thanks and be safe guys!!
You should set the layer's affineTransform. You can apply a rotation transform like:
[self.triangle setAffineTransform:CGAffineTransformMakeRotation(yaw)];
This method, setAffineTransform:, is a convenience for setting the layer's transform property, which is a more general type of transform, a CATransform3D. You can also set the layer's transform directly; if you want to do that, you can make a rotation about the z-axis like this:
self.triangle.transform = CATransform3DMakeRotation(yaw, 0, 0, 1);
In this case the first argument is the angle (in radians) and the last three arguments specify the axis of rotation.
Note that you should not assign or depend on the value of the frame property of a layer whose transform is not the identity (CGAffineTransformIdentity). When you use the transform property, you should set the size and position of your layer by assigning the layer's position and bounds properties, and similarly you should read position and bounds when you want to find out information about the layer's position and size.
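Putting those pieces together, a minimal sketch of the update method, assuming the triangle layer is stored in a property (self.triangle) and its path is defined around the layer's own origin so the rotation pivots on the triangle rather than on the screen corner:

-(void)updateTriangleWithYaw:(float)yaw
{
    [CATransaction begin];
    [CATransaction setDisableActions:YES]; // follow the gyro without implicit animations
    [self.triangle setAffineTransform:CGAffineTransformMakeRotation(yaw)]; // yaw is in radians
    [CATransaction commit];
}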
Since I installed Xcode 6.0.1, my OpenGL ES 1 layer has been displayed incorrectly on any simulated device (as well as on real hardware: an iPhone 4S with iOS 8): wrong size and position of the layer.
Changing the glViewport parameters doesn't make any difference. I can actually comment it out and it'll look the same.
PARTIAL SOLUTION:
I've checked and then unchecked the "Use Auto Layout" box so that Xcode updated my window to the newer version requirements. Now everything looks okay on the iPhone 4S, but the window size is still wrong on other devices.
Has anyone got their OpenGL ES 1 code updated for the new devices?
One possible workaround is to get the dimensions (width and height) and decide on the real width depending on which dimension is bigger, something like this:
CGRect screenBounds = [[UIScreen mainScreen] bounds];
float scale = [UIScreen mainScreen].scale;
float width = screenBounds.size.width;
float height = screenBounds.size.height;
NSLog(#"scale: %f, width: %f, height: %f", scale, width, height);
float w = width > height ? width : height;
if (scale == 2.0f && w == 568.0f) { ...
I have a similar problem with my OpenGL ES 1 app.
The following code used to always return the renderbuffer size in portrait mode (shouldAutorotate is NO, so autorotation is disabled in my app):
glGetRenderbufferParameterivOES( GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth );
glGetRenderbufferParameterivOES( GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight );
But now (Xcode 6.0.1, iOS 8) the size depends on the device orientation, so I get the wrong renderbuffer size.
Checking and then unchecking "Use Auto Layout" didn't help me.
I've managed to display the render buffer properly by keeping in mind that [[UIScreen mainScreen] bounds].size is orientation-dependent on iOS 8 and by setting up the view programmatically. So my app delegate looks like this:
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
CGRect screenBound = [[UIScreen mainScreen] bounds];
CGSize screenSize = screenBound.size;
screenHeight = screenSize.height;
screenWidth = screenSize.width;
window = [[UIWindow alloc] initWithFrame:CGRectMake(0, 0, screenWidth, screenHeight)];
window.bounds = CGRectMake(0, 0, screenWidth, screenHeight);
MainViewController = [[UIViewController alloc] init];
glView = [[EAGLView alloc] initWithFrame:CGRectMake(0, 0, screenWidth, screenHeight)];
glView.bounds = CGRectMake(0, 0, screenWidth, screenHeight);
MainViewController.view = glView;
window.rootViewController = MainViewController;
[window makeKeyAndVisible];
[glView setupGame];
[glView startAnimation];
return YES;
}
I'm building this to run on the Mac, not iOS, which is quite different. I'm almost there with the speedo, but the math of making the needle move up and down the scale as data comes in eludes me.
I'm measuring wind speed live and want to display it as a gauge (a speedometer), with the needle moving as the wind speed changes. I have the fundamentals OK. I can also (and will) load the images into holders, but later. For now I want to get it working ...
- (void)drawRect:(NSRect)rect
{
NSRect myRect = NSMakeRect ( 21, 21, 323, 325 ); // set the Graphics class square size to match the gauge image
[[NSColor blueColor] set]; // colour it in in blue - just because you can...
NSRectFill ( myRect );
[[NSGraphicsContext currentContext] // set up the graphics context
setImageInterpolation: NSImageInterpolationHigh]; // highres image
//-------------------------------------------
NSSize viewSize = [self bounds].size;
NSSize imageSize = { 320, 322 }; // the actual image rectangle size. You can scale the image here if you like. x and y remember
NSPoint viewCenter;
viewCenter.x = viewSize.width * 0.50; // set the view center, both x & y
viewCenter.y = viewSize.height * 0.50;
NSPoint imageOrigin = viewCenter;
imageOrigin.x -= imageSize.width * 0.50; // set the origin of the first point
imageOrigin.y -= imageSize.height * 0.50;
NSRect destRect;
destRect.origin = imageOrigin; // set the image origin
destRect.size = imageSize; // and size
NSString * file = @"/Users/robert/Documents/XCode Projects/xWeather Graphics/Gauge_mph_320x322.png"; // stuff in the image
NSImage * image = [[NSImage alloc] initWithContentsOfFile:file];
//-------------------------------------------
NSSize view2Size = [self bounds].size;
NSSize image2Size = { 149, 17 }; // the orange needle
NSPoint view2Center;
view2Center.x = view2Size.width * 0.50; // set the view center, both x & y
view2Center.y = view2Size.height * 0.50;
NSPoint image2Origin = view2Center;
//image2Origin.x -= image2Size.width * 0.50; // set the origin of the first point
image2Origin.x = 47;
image2Origin.y -= image2Size.height * 0.50;
NSRect dest2Rect;
dest2Rect.origin = image2Origin; // set the image origin
dest2Rect.size = image2Size; // and size now is needle size
NSString * file2 = @"/Users/robert/Documents/XCode Projects/xWeather Graphics/orange-needle01.png";
NSImage * image2 = [[NSImage alloc] initWithContentsOfFile:file2];
// do image 1
[image setFlipped:YES]; // flip it because everything else is in this exercise
// do image 2
[image2 setFlipped:YES]; // flip it because everything else is in this exercise
[image drawInRect: destRect
fromRect: NSZeroRect
operation: NSCompositeSourceOver
fraction: 1.0];
[image2 drawInRect: dest2Rect
fromRect: NSZeroRect
operation: NSCompositeSourceOver
fraction: 1.0];
NSBezierPath * path = [NSBezierPath bezierPathWithRect:destRect]; // draw a red border around the whole thing
[path setLineWidth:3];
[[NSColor redColor] set];
[path stroke];
}
// flip the coords
- (BOOL) isFlipped { return YES; }
@end
The result is here (the gauge part, that is). Now all I have to do is make the needle move in response to input.
Apple has some sample code, called SpeedometerView, which does exactly what you're asking. It'll surely take some doing to adapt it for your use, but it's probably a decent starting point.
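In the meantime, a minimal sketch of the needle math that could slot into drawRect: in place of the plain image2 draw. windSpeed and maxSpeed are assumed instance variables, and the 225-degree-to-minus-45-degree sweep is a guess that has to be matched to the gauge artwork:

CGFloat speedFraction = MIN(windSpeed / maxSpeed, 1.0); // 0.0 .. 1.0 along the scale
CGFloat angleDeg = 225.0 - speedFraction * 270.0;       // map the fraction onto the dial sweep

NSAffineTransform *needleRotation = [NSAffineTransform transform];
[needleRotation translateXBy:viewCenter.x yBy:viewCenter.y];   // pivot on the gauge center
[needleRotation rotateByDegrees:angleDeg];
[needleRotation translateXBy:-viewCenter.x yBy:-viewCenter.y];

[NSGraphicsContext saveGraphicsState];
[needleRotation concat];
[image2 drawInRect: dest2Rect
          fromRect: NSZeroRect
         operation: NSCompositeSourceOver
          fraction: 1.0];
[NSGraphicsContext restoreGraphicsState];

Then call [self setNeedsDisplay:YES] whenever a new wind-speed reading arrives, so the view redraws with the updated angle.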
I am using NSAffineTransform to rotate/reflect an NSImage, and when using larger images I run into an error:
NSImage: Insufficient memory to allocate pixel data buffer of 4496342739800064 bytes
The image I am transforming here is 6,998,487 bytes at 4110 x 2735 px. Does NSAffineTransform really need this much memory to do this transformation, or am I going wrong somewhere? Here's my rotate code:
-(NSImage *)rotateLeft:(NSImage *)img{
NSImage *existingImage = img;
NSSize existingSize;
existingSize.width = existingImage.size.width;
existingSize.height = existingImage.size.height;
NSSize newSize = NSMakeSize(existingSize.height, existingSize.width);
NSImage *rotatedImage = [[NSImage alloc] initWithSize:newSize];
[rotatedImage lockFocus];
NSAffineTransform *rotateTF = [NSAffineTransform transform];
NSPoint centerPoint = NSMakePoint(newSize.width / 2, newSize.height / 2);
[rotateTF translateXBy: centerPoint.x yBy: centerPoint.y];
[rotateTF rotateByDegrees: 90];
[rotateTF translateXBy: -centerPoint.y yBy: -centerPoint.x];
[rotateTF concat];
NSRect r1 = NSMakeRect(0, 0, newSize.height, newSize.width);
[existingImage drawAtPoint:NSMakePoint(0,0)
fromRect:r1
operation:NSCompositeCopy fraction:1.0];
[rotatedImage unlockFocus];
return rotatedImage;
}
I am using ARC in my project.
Thanks in advance, Ben