I have a UIImageView that I am moving around a circle with CGAffineTransformRotate. Works great! But when the user presses a stop button I would like to get the actual x/y position of the UIImageView. So far I am always getting the original x/y values from when the UIImageView was created.
Is there a way to get the actual position, when the user stopped the rotation?
I have found the solution and share it here in case someone runs into a similar case:
From UIBezierPath I use the bounds information, and this gives me the position where the UIImageView stopped. Here is the code:
UIBezierPath *path = [[UIBezierPath alloc] init];
[path addArcWithCenter:CGPointMake(iMiddleX, iMiddleY) radius:flR startAngle:degreesToRadians(flDegrees-0.01) endAngle:degreesToRadians(flDegrees) clockwise:YES];
CAKeyframeAnimation *pathAnimation = [CAKeyframeAnimation animationWithKeyPath:@"position"];
pathAnimation.calculationMode = kCAAnimationPaced;
pathAnimation.fillMode = kCAFillModeForwards;
pathAnimation.removedOnCompletion = NO;
pathAnimation.repeatCount = 1;
pathAnimation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionDefault];
pathAnimation.duration = 1.0;
pathAnimation.path = path.CGPath;
NSInteger iX = path.bounds.origin.x;
NSInteger iY = path.bounds.origin.y;
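An alternative I came across later (a sketch, assuming the image view is reachable through a property I'm calling self.imageView here): ask the layer's presentationLayer, which reflects the in-flight/final animated values rather than the original model frame.
CALayer *presentation = (CALayer *)[self.imageView.layer presentationLayer];
if (presentation != nil) {
    // position is the layer's centre point in its superlayer's coordinates
    CGPoint stoppedAt = presentation.position;
    NSLog(@"stopped at x: %f  y: %f", stoppedAt.x, stoppedAt.y);
}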
It's been a while, as I was hospitalized for 3 months after a motorcycle accident.
So I just got to renew my Apple programming subscription :-)
I have another question that has been on my mind for quite some time.
In my iPad application I draw a triangle in the center of an iPad like this:
- (void)initTriangle
{
CGRect screenBound = [[UIScreen mainScreen] bounds];
CGSize screenSize = screenBound.size;
CGFloat screenWidth = screenSize.width;
CGFloat screenHeight = screenSize.height;
// draw triangle (TRIANGLE)
CGMutablePathRef path = CGPathCreateMutable();
CGPathMoveToPoint(path,NULL, 0.5*screenWidth, 0.5*screenHeight-25);
CGPathAddLineToPoint(path, NULL, 0.5*screenWidth-25, 0.5*screenHeight+25);
CGPathAddLineToPoint(path, NULL, 0.5*screenWidth+25, 0.5*screenHeight+25);
CGPathAddLineToPoint(path, NULL, 0.5*screenWidth, 0.5*screenHeight-25);
CAShapeLayer *triangle = [CAShapeLayer layer];
[triangle setPath:path];
[triangle setFillColor:[[UIColor blackColor] CGColor]];
[[[self view] layer] addSublayer:triangle];
CGPathRelease(path);
}
And I call this from my viewDidLoad like this:
[self initTriangle];
Now I'm trying to rotate this triangle with the rotation of my iPad around the z-axis while it is lying flat on the table. I have a function that gives me the yaw readings as a float, and I'm calling my
-(void)updateTriangleWithYaw:(float)yaw
method, but I don't know exactly what to put in there to make it rotate.
Here is what my method looks like so far:
-(void)updateTriangleWithYaw:(float)yaw
{
CGRect screenBound = [[UIScreen mainScreen] bounds];
CGSize screenSize = screenBound.size;
CGFloat screenWidth = screenSize.width;
CGFloat screenHeight = screenSize.height;
NSLog(#"YAW: %f", yaw);
Z += 2 * yaw;
Z *= 0.8;
CGFloat newR = R + 10 * yaw;
self.triangle.frame = CGRectMake(0.5*screenWidth, 0.5*screenHeight, newR, newR);
}
Any help will be greatly appreciated!
Thanks and be safe guys!!
You should set the layer's affineTransform. You can apply a rotation transform like:
[self.triangle setAffineTransform:CGAffineTransformMakeRotation(yaw)];
This method, setAffineTransform:, is a convenience for setting the layer's transform property, which is of the more general type CATransform3D. You can also set the layer's transform directly; if you want to do that, you can make a rotation about the z-axis like:
self.triangle.transform = CATransform3DMakeRotation(yaw, 0, 0, 1);
In this case the first argument is the angle (in radians) and the last three arguments specify the axis of rotation.
Note that you should not assign or depend on the value of the frame property of a layer whose transform is not the identity (CGAffineTransformIdentity). When you use the transform property, set the size and position of your layer by assigning its position and bounds properties, and similarly read position and bounds when you need to know where the layer is and how big it is.
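Tying that back to the question, a minimal sketch of updateTriangleWithYaw: could look like this (it assumes self.triangle is a property holding the CAShapeLayer created in initTriangle, and that yaw is already in radians):
-(void)updateTriangleWithYaw:(float)yaw
{
    // Disable implicit animations so the layer follows the gyro updates immediately.
    [CATransaction begin];
    [CATransaction setDisableActions:YES];
    // Rotate the triangle layer about the z-axis by the current yaw.
    [self.triangle setAffineTransform:CGAffineTransformMakeRotation(yaw)];
    [CATransaction end];
}
One thing to watch: the rotation pivots on the layer's anchorPoint, and since the path above is built in superlayer coordinates, you may need to give the layer explicit bounds and a position (or build the path around (0, 0)) so the triangle spins around its own centre rather than the layer's origin.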
I'm building this to run on the Mac, not iOS, which is quite different. I'm almost there with the speedo, but the math of making the needle move up and down the scale as data comes in eludes me.
I'm measuring wind speed live, and want to display it as a gauge - speedometer, with the needle moving as the windspeed changes. I have the fundamentals ok. I can also - and will - load the images into holders, but later. For now I want to get it working ...
- (void)drawRect:(NSRect)rect
{
NSRect myRect = NSMakeRect ( 21, 21, 323, 325 ); // set the Graphics class square size to match the gauge image
[[NSColor blueColor] set]; // colour it in in blue - just because you can...
NSRectFill ( myRect );
[[NSGraphicsContext currentContext] // set up the graphics context
setImageInterpolation: NSImageInterpolationHigh]; // highres image
//-------------------------------------------
NSSize viewSize = [self bounds].size;
NSSize imageSize = { 320, 322 }; // the actual image rectangle size. You can scale the image here if you like. x and y remember
NSPoint viewCenter;
viewCenter.x = viewSize.width * 0.50; // set the view center, both x & y
viewCenter.y = viewSize.height * 0.50;
NSPoint imageOrigin = viewCenter;
imageOrigin.x -= imageSize.width * 0.50; // set the origin of the first point
imageOrigin.y -= imageSize.height * 0.50;
NSRect destRect;
destRect.origin = imageOrigin; // set the image origin
destRect.size = imageSize; // and size
NSString * file = @"/Users/robert/Documents/XCode Projects/xWeather Graphics/Gauge_mph_320x322.png"; // stuff in the image
NSImage * image = [[NSImage alloc] initWithContentsOfFile:file];
//-------------------------------------------
NSSize view2Size = [self bounds].size;
NSSize image2Size = { 149, 17 }; // the orange needle
NSPoint view2Center;
view2Center.x = view2Size.width * 0.50; // set the view center, both x & y
view2Center.y = view2Size.height * 0.50;
NSPoint image2Origin = view2Center;
//image2Origin.x -= image2Size.width * 0.50; // set the origin of the first point
image2Origin.x = 47;
image2Origin.y -= image2Size.height * 0.50;
NSRect dest2Rect;
dest2Rect.origin = image2Origin; // set the image origin
dest2Rect.size = image2Size; // and size now is needle size
NSString * file2 = @"/Users/robert/Documents/XCode Projects/xWeather Graphics/orange-needle01.png";
NSImage * image2 = [[NSImage alloc] initWithContentsOfFile:file2];
// do image 1
[image setFlipped:YES]; // flip it because everything else is in this exercise
// do image 2
[image2 setFlipped:YES]; // flip it because everything else is in this exercise
[image drawInRect: destRect
fromRect: NSZeroRect
operation: NSCompositeSourceOver
fraction: 1.0];
[image2 drawInRect: dest2Rect
fromRect: NSZeroRect
operation: NSCompositeSourceOver
fraction: 1.0];
NSBezierPath * path = [NSBezierPath bezierPathWithRect:destRect]; // draw a red border around the whole thing
[path setLineWidth:3];
[[NSColor redColor] set];
[path stroke];
}
// flip the coords
- (BOOL) isFlipped { return YES; }
@end
The result is here. The gauge part that is. Now all I have to do is make the needle move in response to input.
Apple has some sample code, called SpeedometerView, which does exactly what you're asking. It'll surely take some doing to adapt it for your use, but it's probably a decent starting point.
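As for the math of moving the needle: one way (a sketch with assumed numbers, since the sweep and speed range depend on your gauge artwork, and self.currentSpeed is a hypothetical property holding the latest reading) is to map the speed linearly onto the dial's angular sweep and rotate the needle image around the gauge centre before drawing it.
// Map a wind speed onto a needle angle. The 0-100 range and the
// -120..+120 degree sweep are assumptions; match them to your dial.
- (CGFloat)needleAngleForSpeed:(CGFloat)speed
{
    CGFloat minSpeed = 0.0,  maxSpeed = 100.0;
    CGFloat startAngle = -120.0, endAngle = 120.0;
    CGFloat clamped = MAX(minSpeed, MIN(maxSpeed, speed));
    CGFloat fraction = (clamped - minSpeed) / (maxSpeed - minSpeed);
    return startAngle + fraction * (endAngle - startAngle);
}
Then, in drawRect:, wrap the needle drawing in a rotation about the gauge centre (here assumed to coincide with viewCenter):
NSAffineTransform *spin = [NSAffineTransform transform];
[spin translateXBy:viewCenter.x yBy:viewCenter.y];                   // move pivot to gauge centre
[spin rotateByDegrees:[self needleAngleForSpeed:self.currentSpeed]];
[spin translateXBy:-viewCenter.x yBy:-viewCenter.y];                 // move pivot back
[NSGraphicsContext saveGraphicsState];
[spin concat];
[image2 drawInRect:dest2Rect fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
Whenever a new wind-speed reading arrives, store it and call [self setNeedsDisplay:YES] so drawRect: runs again with the new angle.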
I need to get an NSView's frame/bounds relative to the screen. In other words, I need the x and y coordinates to be the position on the screen, not the position relative to its superview.
I've come up with the following solution based on comments.
NSRect frameRelativeToWindow = [self.view
convertRect:self.view.bounds toView:nil
];
#if MAC_OS_X_VERSION_MAX_ALLOWED > MAC_OS_X_VERSION_10_6
NSPoint pointRelativeToScreen = [self.view.window
convertRectToScreen:frameRelativeToWindow
].origin;
#else
NSPoint pointRelativeToScreen = [self.view.window
convertBaseToScreen:frameRelativeToWindow.origin
];
#endif
NSRect frame = self.view.frame;
frame.origin.x = pointRelativeToScreen.x;
frame.origin.y = pointRelativeToScreen.y;
NSRect frameRelativeToWindow = [myView convertRect:myView.bounds toView:nil];
NSRect frameRelativeToScreen = [myView.window convertRectToScreen:frameRelativeToWindow];
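If this is needed in more than one place, it can be wrapped up as a small NSView category (a sketch assuming 10.7+, where convertRectToScreen: is available; the category and method names are mine):
@implementation NSView (ScreenGeometry)

// The receiver's bounds expressed in screen coordinates.
- (NSRect)frameRelativeToScreen
{
    NSRect rectInWindow = [self convertRect:self.bounds toView:nil]; // view -> window
    return [self.window convertRectToScreen:rectInWindow];           // window -> screen
}

@end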
I have a problem in my cocos2d application in landscape mode:
When I add a new object inherited from CCNode to a layer and then add that layer to the scene, the maximum position should be 480 on X and 320 on Y in landscape mode. But my object ends up at position 480 on X when its coordinates are (220, 0). Does anybody know how to solve this problem? Thanks!
- (void) applicationDidFinishLaunching:(UIApplication*)application
{
CCScene *scene = [CCScene node];
CCLayer *layer = [CCLayer node];
//layer.anchorPoint = ccp(1, 1);
//layer.contentSize = CGSizeMake(480, 320);
CCSprite *sp = [CCSprite spriteWithFile:@"fon.png"];
[layer addChild: sp];
[scene addChild: layer];
[[CCDirector sharedDirector] runWithScene: scene];
}
I think you are misunderstanding the coordinates.
The phone's coordinate system does not rotate with the device, so x and y refer to the same physical axes regardless of orientation.
When the phone is on its side, what looks like x is actually y, and vice versa.
Try this:
player1 = [[Player alloc] initWithPosition:CGPointMake(20, 200)];
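If the positions you are feeding in come from UIKit (touches, screen points), another option is to convert them explicitly first; this sketch assumes cocos2d-iphone's CCDirector convertToGL:, and sp is the sprite from the code above:
// Convert a portrait-oriented UIKit point into cocos2d's GL coordinate
// system, which accounts for the current landscape orientation.
CGPoint uiPoint = CGPointMake(220, 0);
CGPoint glPoint = [[CCDirector sharedDirector] convertToGL:uiPoint];
sp.position = glPoint;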
Greetings,
I'm trying to draw a circle on a map. All the separate pieces of this project work independently, but when I put them all together it breaks.
I set up my UI in viewDidLoad, retaining most of it.
I then use touch events to call my refreshMap method:
-(void)refreshMap{
NSString *thePath = [NSString stringWithFormat:@"http://maps.google.com/staticmap?center=%f,%f&zoom=%i&size=640x640&maptype=hybrid",viewLatitude, viewLongitude, zoom];
NSURL *url = [NSURL URLWithString:thePath];
NSData *data = [NSData dataWithContentsOfURL:url];
UIImage *mapImage = [[UIImage alloc] initWithData:data];
mapImage = [self addCircle:(mapImage) influence:(70) latCon:(320) lonCon:(320)];
NSLog(#"-- mapimageview retaincount %i",[mapImage retainCount]);
mapImageView.image = mapImage;
[mapImage release];
}
Set up like this, it will load the map with a circle once, but if the map is refreshed again it crashes.
If I comment out the mapImage release it works repeatedly, but that causes a memory leak.
The addCircle method I'm using:
-(UIImage *)addCircle:(UIImage *)img influence:(CGFloat)radius latCon:(CGFloat)lat lonCon:(CGFloat)lon{
int w = img.size.width;
int h = img.size.height;
lon = h - lon;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, w, h, 8, 4 * w, colorSpace, kCGImageAlphaPremultipliedFirst);
//draw the circle
CGContextDrawImage(context, CGRectMake(0, 0, w, h), img.CGImage);
CGRect leftOval = {lat- radius/2, lon - radius/2, radius, radius};
CGContextSetRGBFillColor(context, 0.0, 0.0, 1.0, 0.3);
CGContextAddEllipseInRect(context, leftOval);
CGContextFillPath(context);
CGImageRef imageMasked = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
return [UIImage imageWithCGImage:imageMasked];
}
Any insight/advice is greatly appreciated!
UIImage *mapImage = [[UIImage alloc] initWithData:data];
mapImage = [self addCircle:(mapImage) influence:(70) latCon:(320) lonCon:(320)];
That's not good. You're losing the reference to the contents of mapImage when you reassign it on the second line. The easiest way to fix this is probably to add an additional variable, so you can keep track of both images.
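A sketch of that fix, keeping the question's manual retain/release style (addCircle: returns an autoreleased image, so only the alloc/init'd one needs a release):
UIImage *sourceImage = [[UIImage alloc] initWithData:data];
// Keep a second variable so the original reference stays available to release.
UIImage *circledImage = [self addCircle:sourceImage influence:70 latCon:320 lonCon:320];
mapImageView.image = circledImage;   // the image view retains what it needs
[sourceImage release];               // release only the image we alloc/init'd
While you're in there, note that addCircle: never releases the CGImageRef it gets from CGBitmapContextCreateImage; storing the UIImage in a local, calling CGImageRelease(imageMasked), and then returning it would close that leak as well.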