So I basically have a bunch of routes (think of 3D positions) and I'd like to draw them in a view.
I haven't actually done anything with graphics before and was curious how I should even start thinking about doing this in Xcode with my Cocoa app.
Anyone have any suggestions?
Should I subclass NSView? Should I use OpenGL?
Thanks!
Edit: So I was able to subclass NSView, see here: http://groovyape.com/map.png
Is this the best way? (I realize it's only 2D; I'm afraid to try 3D, heh.) And what if I want to allow people to click on the circles - how can I do this?
In your NSView subclass you can override the various mouse event handlers to do whatever you want:
- (void)mouseDown:(NSEvent *)event;
- (void)mouseDragged:(NSEvent *)event;
- (void)mouseUp:(NSEvent *)event;
Then you'll need a way to know where the circles are so you can tell when one has been clicked. If you represent a circle as an NSPoint centre and a radius, something like this will work:
- (BOOL)isPoint:(NSPoint)click inCircleAtPoint:(NSPoint)centre withRadius:(float)r
{
    float dx = click.x - centre.x;
    float dy = click.y - centre.y;
    // Get the distance from the click to the centre of the circle.
    float h = hypot(dx, dy);
    // Is the distance less than the radius?
    return (h < r);
}

- (void)mouseDown:(NSEvent *)event
{
    // locationInWindow is in window coordinates, so convert it into this view's coordinate system.
    NSPoint clickPoint = [self convertPoint:[event locationInWindow] fromView:nil];
    if ([self isPoint:clickPoint inCircleAtPoint:mapPoint withRadius:5.0]) {
        // Click was in the circle
    }
}
I'm having some trouble achieving the result I need.
Using UIScrollView's visibleRect method and a gesture recognizer, I can get where on the screen the user touched, or draw a rect, for example.
Where I'm having trouble is working out where that touch event is relative to the document shown in the UIScrollView.
So, if I have a document like A4 or Letter size and the visible part is the bottom of that document, using the above method I can see that the user tapped near the top of the window. But how can I know which point in the document that tap corresponds to?
Use contentOffset to achieve that:
Add the scroll offset to the x and y touch coordinates:
CGFloat xOffset = _myScrollView.contentOffset.x;
CGFloat yOffset = _myScrollView.contentOffset.y;
Then subtract the position of the scroll view:
CGRect frame = _myScrollView.frame;
Putting it all together:
CGFloat pdfTouchX = screenTouchX - frame.origin.x + xOffset;
CGFloat pdfTouchY = screenTouchY - frame.origin.y + yOffset;
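Putting those pieces together, a minimal sketch might look like this (the helper name documentPointForGesture: is made up for illustration; _myScrollView and a gesture recognizer attached to its superview are assumed, matching the snippets above):

// Hypothetical helper: maps a gesture's touch location to document coordinates.
- (CGPoint)documentPointForGesture:(UIGestureRecognizer *)gesture
{
    // Touch location relative to the scroll view's superview, i.e. the same
    // coordinate space as _myScrollView.frame.
    CGPoint screenTouch = [gesture locationInView:_myScrollView.superview];

    CGRect frame = _myScrollView.frame;
    CGFloat xOffset = _myScrollView.contentOffset.x;
    CGFloat yOffset = _myScrollView.contentOffset.y;

    // Same arithmetic as above: remove the scroll view's position, add the scroll offset.
    CGFloat pdfTouchX = screenTouch.x - frame.origin.x + xOffset;
    CGFloat pdfTouchY = screenTouch.y - frame.origin.y + yOffset;
    return CGPointMake(pdfTouchX, pdfTouchY);
}

Alternatively, asking for [gesture locationInView:_myScrollView] (or for the scroll view's document subview) should already account for the content offset, because a scroll view scrolls by shifting its bounds origin, so in many cases no manual arithmetic is needed.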
I am making a clock that can be controlled by the user. I have a UIImageView named minhandview (the minute-hand needle of the clock). I want to rotate it under my finger (it should track the finger movement), turning in constant 5-degree steps while the user keeps rotating it. I am using the following code for it:
- (void)touchesMoved:(NSSet *)_touches withEvent:(UIEvent *)_event
{
    UITouch *touch = [[_event allTouches] anyObject];
    CGPoint pt = [touch locationInView:self.view];
    if ([touch view] == minhandview)
    {
        minhandview.transform = CGAffineTransformRotate(minhandview.transform, Degrees2Radians(5));
    }
}
The image rotates fine, 5 degrees every time this method is called, but it is not following my finger (it rotates too fast). How can I correct it? Please help. Thanks.
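One common way to make the hand actually track the finger is to derive the angle from the touch position with atan2 instead of applying a fixed rotation on every event. The following is only a sketch under assumptions (not code from this question): minhandview is a subview of self.view, its center is the pivot, the hand image points straight up when untransformed, and Degrees2Radians is the same macro used above.

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [[event allTouches] anyObject];
    if ([touch view] != minhandview) {
        return;
    }

    // Touch position and pivot in the same coordinate space (self.view).
    CGPoint pt = [touch locationInView:self.view];
    CGPoint centre = minhandview.center;

    // Angle of the touch around the pivot, in radians (0 = pointing right).
    CGFloat angle = atan2(pt.y - centre.y, pt.x - centre.x);

    // Offset by 90 degrees because the untransformed image is assumed to point up,
    // then snap to 5-degree steps so the hand moves in ticks.
    CGFloat degrees = angle * 180.0f / M_PI + 90.0f;
    degrees = roundf(degrees / 5.0f) * 5.0f;

    minhandview.transform = CGAffineTransformMakeRotation(Degrees2Radians(degrees));
}

Setting the transform absolutely (CGAffineTransformMakeRotation) rather than accumulating rotations (CGAffineTransformRotate) is what keeps the hand pinned under the finger.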
I'm drawing a rectangle on a custom subclass of NSView which can then be dragged within the borders of the view:
The code for doing this is:
// Get the starting location of the mouse down event.
NSPoint location = [self convertPoint:[event locationInWindow] fromView:nil];

// Break out if this is not within the bounds of the rect.
if (!NSPointInRect(location, [self boundsOfAllControlPoints])) {
    return;
}

while (YES) {
    // Begin modal mouse tracking, looking for mouse dragged and mouse up events.
    NSEvent *trackingEvent = [[self window] nextEventMatchingMask:(NSLeftMouseDraggedMask | NSLeftMouseUpMask)];

    // Get the tracking location and convert it to a point in the view.
    NSPoint trackingLocation = [self convertPoint:[trackingEvent locationInWindow] fromView:nil];

    // Calculate the deltas of x and y compared to the previous point.
    long dX = location.x - trackingLocation.x;
    long dY = location.y - trackingLocation.y;

    // Update all points in the rect.
    for (int i = 0; i < 4; i++) {
        NSPoint newPoint = NSMakePoint(points[i].x - dX, points[i].y - dY);
        points[i] = newPoint;
    }

    NSLog(@"Tracking location x: %f y: %f", trackingLocation.x, trackingLocation.y);

    // Set the current location as the previous location.
    location = trackingLocation;

    // Ask for a redraw.
    [self setNeedsDisplay:YES];

    // Stop mouse tracking if a mouse up is received.
    if ([trackingEvent type] == NSLeftMouseUp) {
        break;
    }
}
I basically catch a mouse down event and check whether its location is inside the draggable rect. If it is, I start tracking the movement of the mouse in trackingEvent. I calculate the deltas for the x and y coordinates, create new points for the draggable rect, and request a refresh of the view's display.
Although it works, it looks a bit "amateurish": during the drag, the mouse pointer catches up with the shape being dragged and eventually crosses its borders. In other drag operations, the mouse pointer stays fixed to the same position on the object being dragged from start to finish of the drag operation.
What is causing this effect?
EDIT:
I've changed my approach following Rob's answer and adopted the three method approach:
- (void)mouseDown:(NSEvent *)event {
    // There was a mouse down event which might be in the thumbnail rect.
    [self setDragStartPoint:[self convertPoint:[event locationInWindow] fromView:nil]];

    // Indicate we have a valid start of a drag.
    if (NSPointInRect([self dragStartPoint], [self boundsOfAllControlPoints])) {
        [self setValidDrag:YES];
    }
}

- (void)mouseDragged:(NSEvent *)anEvent {
    // Return if a valid drag was not detected during a mouse down event.
    if (![self validDrag]) {
        return;
    }
    NSLog(@"Tracking a drag.");

    // Get the tracking location and convert it to a point in the view.
    NSPoint trackingLocation = [self convertPoint:[anEvent locationInWindow] fromView:nil];

    // Calculate the deltas of x and y compared to the previous point.
    long dX = [self dragStartPoint].x - trackingLocation.x;
    long dY = [self dragStartPoint].y - trackingLocation.y;

    // Update all points in the rect.
    for (int i = 0; i < 4; i++) {
        NSPoint newPoint = NSMakePoint(points[i].x - dX, points[i].y - dY);
        points[i] = newPoint;
    }

    // Ask for a redraw.
    [self setNeedsDisplay:YES];
    NSLog(@"Tracking location x: %f y: %f", trackingLocation.x, trackingLocation.y);

    // Set the current location as the previous location.
    [self setDragStartPoint:trackingLocation];
    NSLog(@"Completed mouseDragged method. Allow for repaint.");
}

- (void)mouseUp:(NSEvent *)anEvent {
    // End the drag.
    [self setValidDrag:NO];
    [self setNeedsDisplay:YES];
}
Although the effect is slightly better, there's still a noticeable delay, with the rect lagging behind the mouse pointer's direction of movement. This is especially noticeable when I move the mouse slowly during a drag.
EDIT 2:
Got it. The problem was with calculating the deltas: I declared them as long when they should have been float. Works great now.
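In other words, the delta calculation in mouseDragged: becomes (using CGFloat, which is equivalent to the float suggested above):

// Deltas as floating-point values, so small (sub-point) mouse movements
// are no longer truncated to whole numbers the way they were with long.
CGFloat dX = [self dragStartPoint].x - trackingLocation.x;
CGFloat dY = [self dragStartPoint].y - trackingLocation.y;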
You're holding onto the event loop during the drag, which means the square never gets a chance to redraw. Your call to setNeedsDisplay: doesn't draw anything; it just flags the view to be redrawn. It can't redraw until this routine returns, which you don't do until the mouse button is released.
Read Handling Mouse Dragging Operations for a full discussion on how to implement dragging in Cocoa. You either need to return from mouseDown: and override mouseDragged: and mouseUp:, or you need to pump the event loop yourself so that the drawing cycle can process.
I tend to recommend the first approach, even though it requires multiple methods. Pumping the event loop can create very surprising bugs and should be used with caution. The most common bugs in my experience are due to delayed selectors firing when you pump the event loop, causing "extra" code to run in the middle of your dragging routine. In some cases, this can cause reentrance and deadlock. (I've had this happen....)
I need to draw lots of polygons, 500k to a million, on the iPad. After experimenting, I can only get 1 fps, if that. This is just an example; my real code has some good-sized polygons.
Here are a few question:
Why don't I have to add the Quartz Framework to my project?
If many of the polygons repeat, can I leverage that with views, or are they too heavyweight, etc.?
Any alternatives? QTPaint can handle this, but it dips into the GPU. Is there anything like Qt for iOS?
Can OpenGL improve 2D performance for this kind of drawing?
Example drawrect:
// X/Y array of boxes
- (void)drawRect:(CGRect)rect
{
    int reset = [self pan].x;
    int markX = reset;
    int markY = [self pan].y;
    CGContextRef context = UIGraphicsGetCurrentContext();

    for (int i = 0; i < 1000; i++) // 1,000,000 boxes in total
    {
        for (int j = 0; j < 1000; j++)
        {
            CGContextMoveToPoint(context, markX, markY);
            CGContextAddLineToPoint(context, markX, markY + 10);
            CGContextAddLineToPoint(context, markX + 10, markY + 10);
            CGContextAddLineToPoint(context, markX + 10, markY);
            CGContextAddLineToPoint(context, markX, markY);
            CGContextStrokePath(context);
            markX += 12;
        }
        markY += 12;
        markX = reset;
    }
}
The pan just moves the array of boxes around on screen with a pan gesture. Any help or hints would be greatly appreciated.
The key issue with your example is that it is not optimized. Whenever drawRect: is called, the device is rendering all 1,000,000 squares. Worse still, it's making 6,000,000 calls to those APIs in the loop. If you want to refresh this view at even a modest 30fps, that is 180,000,000 calls / second.
With your 'simple' example, the size of the draw area is 12,000px × 12,000px; the maximum area you can display on the iPad's display is 768×1024 (assuming full-screen portrait). Therefore, the code is wasting a lot of CPU resources drawing outside the visible area. UIKit has ways of handling this scenario with relative ease.
When managing content that is significantly larger than the visible area, you should limit drawing to only that which is visible. UIKit has a couple of ways of handing this; UIScrollView in combination with a view backed by a CATiledLayer is your best bet.
Steps:
Disclaimer: This is specifically an optimization of your example code above
Create a new View Based Application iPad project
Add a reference to the QuartzCore.framework
Create a new class, say MyLargeView, subclassed from UIView and add the following code:
#import <QuartzCore/QuartzCore.h>

@implementation MyLargeView

- (void)awakeFromNib {
    CATiledLayer *tiledLayer = (CATiledLayer *)[self layer];
    tiledLayer.tileSize = CGSizeMake(512.0f, 512.0f);
}

// Set the layer's class to be CATiledLayer.
+ (Class)layerClass {
    return [CATiledLayer class];
}

- (void)drawRect:(CGRect)rect {
    // Drawing code - only draws what is specified by the rect parameter.
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Set up some constants for the objects being drawn.
    const CGFloat width = 10.0f;             // width of rect
    const CGFloat height = 10.0f;            // height of rect
    const CGFloat xSpace = 4.0f;             // space between cells (horizontal)
    const CGFloat ySpace = 4.0f;             // space between cells (vertical)
    const CGFloat tWidth = width + xSpace;   // total width of cell
    const CGFloat tHeight = height + ySpace; // total height of cell

    CGFloat xStart = floorf(rect.origin.x / tWidth);  // first visible cell (column)
    CGFloat yStart = floorf(rect.origin.y / tHeight); // first visible cell (row)
    CGFloat xCells = rect.size.width / tWidth + 1;    // number of horizontal visible cells
    CGFloat yCells = rect.size.height / tHeight + 1;  // number of vertical visible cells

    for (int x = xStart; x < (xStart + xCells); x++) {
        for (int y = yStart; y < (yStart + yCells); y++) {
            CGFloat xpos = x * tWidth;
            CGFloat ypos = y * tHeight;
            CGContextMoveToPoint(context, xpos, ypos);
            CGContextAddLineToPoint(context, xpos, ypos + height);
            CGContextAddLineToPoint(context, xpos + width, ypos + height);
            CGContextAddLineToPoint(context, xpos + width, ypos);
            CGContextAddLineToPoint(context, xpos, ypos);
            CGContextStrokePath(context);
        }
    }
}

@end
Edit the view controller nib and add a UIScrollView to the view
Add a UIView to the UIScrollView and make sure it fills the UIScrollView
Change the class to MyLargeView
Set frame size of MyLargeView to 12,000×12,000
Finally, open up the view controller .m file and add the following override:
// Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
- (void)viewDidLoad {
    [super viewDidLoad];
    UIScrollView *scrollView = [self.view.subviews objectAtIndex:0];
    scrollView.contentSize = CGSizeMake(12000, 12000);
}
If you look at the drawRect: call, it is only drawing into the area specified by the rect parameter, which will correspond to the tile size (512×512) for the CATiledLayer we configured in the awakeFromNib method. This will scale to a 1,000,000×1,000,000 pixel canvas.
Alternatives to look at are the ScrollViewSuite example, specifically 3_Tiling.
OpenGL is GPU hardware accelerated on iOS devices. Core Graphics drawing is not, and can be many many times slower when dealing with a large number of small graphics primitives (lines).
For lots of small squares, just writing them into a bitmap in C code is faster than Core Graphics line drawing; then draw the bitmap to the view once when done. But OpenGL would be even faster.
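As a rough sketch of that bitmap approach (the function name, grid size, and grayscale format are assumptions for illustration, not part of the original answer), the square outlines can be written into a plain byte buffer and then wrapped in a CGImage:

// Writes 10x10 square outlines on a 12px grid into an 8-bit grayscale buffer
// and returns a CGImage built from it. Caller releases with CGImageRelease.
static CGImageRef CreateSquaresImage(size_t width, size_t height)
{
    uint8_t *pixels = calloc(width * height, 1);   // zeroed: black background

    for (size_t y = 0; y + 9 < height; y += 12) {
        for (size_t x = 0; x + 9 < width; x += 12) {
            for (size_t i = 0; i < 10; i++) {
                pixels[y * width + (x + i)]       = 255;  // top edge
                pixels[(y + 9) * width + (x + i)] = 255;  // bottom edge
                pixels[(y + i) * width + x]       = 255;  // left edge
                pixels[(y + i) * width + (x + 9)] = 255;  // right edge
            }
        }
    }

    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, width,
                                             gray, (CGBitmapInfo)kCGImageAlphaNone);
    CGImageRef image = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    CGColorSpaceRelease(gray);
    free(pixels);
    return image;
}

The resulting image can then be drawn once per drawRect: with CGContextDrawImage, or assigned to a layer's contents, so the per-square work only happens when the data changes rather than on every frame.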
Regarding point 4: OpenGL should handle that fine. Check whether you can reuse those objects and whether you can move some of the logic into GLSL code.
OpenGL performance optimization (in context of WebGL but most of it should apply): http://www.youtube.com/watch?v=rfQ8rKGTVlg
I don't know the details of iOS history so this may not have been an option when the question was first posted. However, I wanted to call out CAShapeLayer as a simple option when dealing with path performance problems. "iOS Core Animation: Advanced Techniques" (find it on Google Books) says CAShapeLayer "uses hardware-accelerated drawing" which I'm taking to mean that it's a GPU-based implementation. The same book has a good usage example in chapter 6, which boils down to this:
Create a CAShapeLayer
Configure its lineWidth, fillColor, strokeColor, etc.
Add the layer as a sublayer of your view's containerView.layer
To draw a path, just set it to the layer's "path" property
This made a gigantic performance difference in my app, as measured by Instruments. If your performance problem is path-based, don't wade into OpenGL before you've tried CAShapeLayer.
I encountered the same problem. After endless searching on Google, CAShapeLayer finally saved me! Here are the detailed steps:
Create a view with CAShapeLayer as its layer type by overriding UIView's + (Class)layerClass method
Configure the layer's lineWidth, fillColor, strokeColor, etc.
Create a UIBezierPath instance
To draw a path, use the UIBezierPath instance to add lines, curves, arcs, etc.; after you've finished drawing, just set bezierPath.CGPath as the layer's "path" property
Here is a simple demo that draws a curve when you touch the demo view:
// Simple ShapeLayerView.m (the view keeps a UIBezierPath instance variable, _bezierPath)
- (instancetype)init {
    self = [super init];
    if (self) {
        _bezierPath = [UIBezierPath bezierPath];
        CAShapeLayer *shapeLayer = (CAShapeLayer *)self.layer;
        shapeLayer.lineWidth = 5;
        shapeLayer.lineJoin = kCALineJoinRound;
        shapeLayer.lineCap = kCALineCapRound;
        shapeLayer.strokeColor = [UIColor yellowColor].CGColor;
        shapeLayer.fillColor = [UIColor blueColor].CGColor;
    }
    return self;
}

+ (Class)layerClass {
    return [CAShapeLayer class];
}

- (void)customDrawShape {
    CAShapeLayer *shapeLayer = (CAShapeLayer *)self.layer;
    [_bezierPath removeAllPoints];
    [_bezierPath moveToPoint:CGPointMake(10, 10)];
    [_bezierPath addQuadCurveToPoint:CGPointMake(2, 2) controlPoint:CGPointMake(50, 50)];
    shapeLayer.path = _bezierPath.CGPath;
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesBegan:touches withEvent:event];
    [self customDrawShape];
}
I've got an NSView (myView) wrapped in an NSScrollView (myScrollView). Using zoom-in/out buttons, the user can alter the scale of myView. If the user is currently scrolled to a particular spot in myView, I'd like to keep that part of the view on-screen after the zooming has taken place.
I've got code that looks like this:
// Preserve the current position in the scroll view.
NSRect oldVisibleRect = [[myScrollView contentView] documentVisibleRect];
NSPoint oldCenter = NSPointFromCGPoint(CGPointMake(oldVisibleRect.origin.x + (oldVisibleRect.size.width / 2.0),
                                                   oldVisibleRect.origin.y + (oldVisibleRect.size.height / 2.0)));

// Adjust my zoom.
++displayZoom;
[self scaleUnitSquareToSize:NSSizeFromCGSize(CGSizeMake(0.5, 0.5))];
[self calculateBounds]; // make sure my frame & bounds are at least as big as the visible content view
[self display];

// Adjust the scroll view to keep the same position.
NSRect newVisibleRect = [[myScrollView contentView] documentVisibleRect];
NSPoint newOffset = NSPointFromCGPoint(CGPointMake((oldCenter.x * 0.5) - (newVisibleRect.size.width / 2.0),
                                                   (oldCenter.y * 0.5) - (newVisibleRect.size.height / 2.0)));
if (newOffset.x < 0)
    newOffset.x = 0;
if (newOffset.y < 0)
    newOffset.y = 0;

[[myScrollView contentView] scrollToPoint:newOffset];
[myScrollView reflectScrolledClipView:[myScrollView contentView]];
And it seems sort of close, but it's not quite right and I can't figure out what I'm doing wrong. My two questions are:
1) Is there not a built-in something along the lines of:
[myView adjustScaleBy: 0.5 whilePreservingLocationInScrollview:myScrollView];
2) If not, can anyone see what I'm doing wrong in my "long way around" approach, above?
Thanks!
Keeping the same scroll position after scaling isn't easy. One thing you need to decide is what you mean by "the same" - do you want the top, middle, or bottom of the visible area before scaling to stay in place after scaling?
Or, more intuitively, do you want the point that stays in place to be the same percentage down the visible rect as the percentage you are scrolled down the document when you start (e.g., so the center of the scroller's thumb doesn't move up or down during a scale; the thumb just grows or shrinks)?
If you want the latter effect, one way to do it is get the NSScrollView's verticalScroller and horizontalScroller, and then read their 'floatValue's. These are normalized from 0 to 1, where '0' means you're at the top of the document and 1 means you're at the end. The nice thing about asking the scroller for this is that if the document is shorter than the NSScrollView, the scroller still returns a sane answer in all cases for 'floatValue,' so you don't have to special-case this.
After you resize, set the NSScrollView's scroll position to be the same percentage it was before the scale - but, sadly, here's where I wave my hands a little bit. I haven't done this in a while in my code, but as I recall you can't just set the NSScrollers' 'floatValue's directly - they'll LOOK scrolled, but they won't actually affect the NSScrollView.
So, you'll have to write some math to calculate the new top-left point in your document based on the percentage you want to be through it. On the y axis, for instance, it'll look like: "If the document is now shorter than the scrollView's contentView, scroll to point 0; otherwise scroll to a point that's ((height of documentView - height of contentView) * oldVerticalPercentage) down the document." The x axis is of course similar.
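A rough sketch of that math, with the same hand-waving about flipped coordinates as above (scrollView is assumed, and the two percentages must be captured before the scale is applied):

// Captured before scaling:
CGFloat oldVerticalPercentage   = [[scrollView verticalScroller] floatValue];
CGFloat oldHorizontalPercentage = [[scrollView horizontalScroller] floatValue];

// ...scale the document view here...

// After scaling, rebuild the scroll origin from the old percentages.
NSSize contentSize  = [[scrollView contentView] bounds].size;
NSSize documentSize = [[scrollView documentView] frame].size;

NSPoint newOrigin = NSZeroPoint;
if (documentSize.height > contentSize.height) {
    newOrigin.y = (documentSize.height - contentSize.height) * oldVerticalPercentage;
}
if (documentSize.width > contentSize.width) {
    newOrigin.x = (documentSize.width - contentSize.width) * oldHorizontalPercentage;
}

[[scrollView contentView] scrollToPoint:newOrigin];
[scrollView reflectScrolledClipView:[scrollView contentView]];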
Also, I'm almost positive you don't need a call to -display here, and in general shouldn't ever call it, ever. (-setNeedsDisplay: at most.)
-Wil
Me thinks you like to type too much… ;-)
// instead of this:
NSPoint oldCenter = NSPointFromCGPoint(CGPointMake(oldVisibleRect.origin.x + (oldVisibleRect.size.width / 2.0),
                                                   oldVisibleRect.origin.y + (oldVisibleRect.size.height / 2.0)));
// use this:
NSPoint oldCenter = NSMakePoint(NSMidX(oldVisibleRect), NSMidY(oldVisibleRect));
// likewise instead of this:
[self scaleUnitSquareToSize:NSSizeFromCGSize(CGSizeMake(0.5, 0.5))];
// use this:
[self scaleUnitSquareToSize:NSMakeSize(0.5, 0.5)];
// and instead of this
NSPoint newOffset = NSPointFromCGPoint(CGPointMake(
(oldCenter.x * 0.5) - (newVisibleRect.size.width / 2.0),
(oldCenter.y * 0.5) - (newVisibleRect.size.height / 2.0)));
// use this:
NSPoint newOffset = NSMakePoint((oldCenter.x - NSWidth(newVisibleRect)) / 2.f,
                                (oldCenter.y - NSHeight(newVisibleRect)) / 2.f);
This is an old question, but I hope someone looking for this finds my answer useful...
float zoomFactor = 1.3;

- (void)zoomIn
{
    NSRect visible = [scrollView documentVisibleRect];
    NSRect newrect = NSInsetRect(visible, NSWidth(visible) * (1 - 1 / zoomFactor) / 2.0, NSHeight(visible) * (1 - 1 / zoomFactor) / 2.0);
    NSRect frame = [scrollView.documentView frame];

    [scrollView.documentView scaleUnitSquareToSize:NSMakeSize(zoomFactor, zoomFactor)];
    [scrollView.documentView setFrame:NSMakeRect(0, 0, frame.size.width * zoomFactor, frame.size.height * zoomFactor)];
    [[scrollView documentView] scrollPoint:newrect.origin];
}

- (void)zoomOut
{
    NSRect visible = [scrollView documentVisibleRect];
    NSRect newrect = NSOffsetRect(visible, -NSWidth(visible) * (zoomFactor - 1) / 2.0, -NSHeight(visible) * (zoomFactor - 1) / 2.0);
    NSRect frame = [scrollView.documentView frame];

    [scrollView.documentView scaleUnitSquareToSize:NSMakeSize(1 / zoomFactor, 1 / zoomFactor)];
    [scrollView.documentView setFrame:NSMakeRect(0, 0, frame.size.width / zoomFactor, frame.size.height / zoomFactor)];
    [[scrollView documentView] scrollPoint:newrect.origin];
}