Core Graphics: Drawing along a path with a normal gradient - Cocoa

There are a number of resources, here and elsewhere on the web, about how to draw with a gradient fill or stroke.
However, AFAICT, none addresses the following requirement: how to draw a path with a normal gradient, where "normal" means orthogonal to the path. With a dark->light->dark linear gradient applied across the stroke, the net effect would be something like toothpaste or a tube. Here is the idea in the case of a round rectangle:
[image: round-rect tube - http://muys.net/cadre_blanc.png]
(this was hand drawn and the corners are not very good).
In the specific case of the round rect, I think I can achieve this effect with 4 linear gradients (the sides) and 4 radial gradients (the corners). But is there better?
Is there an easy solution for any path?

The only "easy" solution I can think of would be to stroke the path multiple times, reducing the stroke width and changing the color slightly each time, to simulate a gradient.
Obviously, this could be an expensive operation for complex paths so you would want to cache the result if possible.
#define RKRandom(x) (arc4random() % ((NSUInteger)(x) + 1))

@implementation StrokeView

- (void)drawRect:(NSRect)rect
{
    NSRect bounds = self.bounds;

    // First, draw using Core Graphics calls.
    CGContextRef c = [[NSGraphicsContext currentContext] graphicsPort];
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, NSMidX(bounds), NSMidY(bounds));
    CGContextSetMiterLimit(c, 90.0);
    CGContextSetLineJoin(c, kCGLineJoinRound);
    CGContextSetLineCap(c, kCGLineCapRound);

    // Build a random squiggle out of 20 cubic curve segments.
    for (NSUInteger f = 0; f < 20; f++)
    {
        CGPathAddCurveToPoint(
            path,
            NULL,
            (CGFloat)RKRandom((NSInteger)NSWidth(bounds)) + NSMinX(bounds),
            (CGFloat)RKRandom((NSInteger)NSHeight(bounds)) + NSMinY(bounds),
            (CGFloat)RKRandom((NSInteger)NSWidth(bounds)) + NSMinX(bounds),
            (CGFloat)RKRandom((NSInteger)NSHeight(bounds)) + NSMinY(bounds),
            (CGFloat)RKRandom((NSInteger)NSWidth(bounds)) + NSMinX(bounds),
            (CGFloat)RKRandom((NSInteger)NSHeight(bounds)) + NSMinY(bounds)
        );
    }

    // Stroke the same path four times, narrowing the width and lightening
    // the color each pass to simulate a normal gradient.
    for (NSInteger i = 0; i < 8; i += 2)
    {
        CGContextSetLineWidth(c, 8.0 - (CGFloat)i);
        CGFloat tint = (CGFloat)i * 0.15;
        CGContextSetRGBStrokeColor(c, 1.0, tint, tint, 1.0);
        CGContextAddPath(c, path);
        CGContextStrokePath(c);
    }
    CGPathRelease(path);

    // Now the same technique using Cocoa drawing.
    NSBezierPath *cocoaPath = [NSBezierPath bezierPathWithRoundedRect:NSInsetRect(self.bounds, 20.0, 20.0)
                                                              xRadius:10.0
                                                              yRadius:10.0];
    for (NSInteger i = 0; i < 8; i += 2)
    {
        [cocoaPath setLineWidth:8.0 - (CGFloat)i];
        CGFloat tint = (CGFloat)i * 0.15;
        NSColor *color = [NSColor colorWithCalibratedRed:tint green:tint blue:1.0 alpha:1.0];
        [color set];
        [cocoaPath stroke];
    }
}

@end
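On the caching point, a minimal sketch of one way to do it (the cachedImage ivar is an assumption, not part of the original code) would be to render the multi-stroke pass into an NSImage once and blit it on subsequent draws:

    // Caching sketch: render the expensive multi-stroke pass once into an NSImage.
    // `cachedImage` is an assumed ivar; invalidate it whenever the path changes.
    if (cachedImage == nil)
    {
        cachedImage = [[NSImage alloc] initWithSize:bounds.size];
        [cachedImage lockFocus];
        // ...perform the multi-pass stroking shown above...
        [cachedImage unlockFocus];
    }
    [cachedImage drawInRect:bounds
                   fromRect:NSZeroRect
                  operation:NSCompositeSourceOver
                   fraction:1.0];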

Related

How is transparency achieved in Cocoa applications?

I am trying to understand how transparency is actually implemented in Cocoa applications. I was expecting the standard blending equation to be used, i.e.
BlendedColour = alpha * layerColour + (1 - alpha) * backgroundColour
However, I noticed a slight difference between the blended colour I observe and what the above equation predicts. To verify it, I did a small experiment as follows:
1.) Created a window, set its transparency to 0.8, and grabbed a screenshot.
2.) Took a screenshot of the same part of the screen without the window, then overlaid the window's image onto it myself using the equation mentioned above (I used OpenCV for that).
There is a slight difference in the colours of the two images, if you look closely. I wanted to understand what is causing the difference.
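For reference, for a single channel the equation predicts, e.g. with alpha = 0.8, layerColour = 200 and backgroundColour = 100: BlendedColour = 0.8 * 200 + 0.2 * 100 = 180. The values in the actual screen grab come out close to, but not exactly, such predictions.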
Resources:
1.) Images from Step 1 and Step 2, respectively
2.) Code used in step 1
NSRect windowRect = {0, 0, 200, 200};
m_NSWindow = [[NSWindow alloc] initWithContentRect:windowRect
                                         styleMask:NSBorderlessWindowMask
                                           backing:NSBackingStoreBuffered
                                             defer:NO];
[m_NSWindow setTitle:@"overlayWindow"];
[m_NSWindow makeKeyAndOrderFront:nil];
g_imageView = [[NSImageView alloc] initWithFrame:NSMakeRect(0, 0, 200, 200)];
[m_NSWindow.contentView addSubview:g_imageView];
[m_NSWindow setOpaque:NO];
[m_NSWindow setAlphaValue:0.8];
NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:nil
                                                                     pixelsWide:200
                                                                     pixelsHigh:200
                                                                  bitsPerSample:8
                                                                samplesPerPixel:4
                                                                       hasAlpha:YES
                                                                       isPlanar:NO
                                                                 colorSpaceName:NSDeviceRGBColorSpace
                                                                   bitmapFormat:NSAlphaNonpremultipliedBitmapFormat
                                                                    bytesPerRow:(200 * 4)
                                                                   bitsPerPixel:32];
// m_paintBuffer is assumed to hold 200 * 200 * 4 = 160000 bytes of non-premultiplied RGBA data.
memcpy(imageRep.bitmapData, m_paintBuffer.data, 160000);
NSSize imageSize = NSMakeSize(200, 200);
NSImage *myImage = [[NSImage alloc] initWithSize:imageSize];
[myImage addRepresentation:imageRep];
[g_imageView setImage:myImage];
4.) Code for step 2
void overlayImage(const cv::Mat &background, const cv::Mat &foreground,
                  cv::Mat &output, cv::Point2i location)
{
    background.copyTo(output);
    // Start at the row indicated by location, or at row 0 if location.y is negative.
    for (int y = std::max(location.y, 0); y < background.rows; ++y)
    {
        int fY = y - location.y; // because of the translation
        // We are done if we have processed all rows of the foreground image.
        if (fY >= foreground.rows)
            break;
        // Start at the column indicated by location,
        // or at column 0 if location.x is negative.
        for (int x = std::max(location.x, 0); x < background.cols; ++x)
        {
            int fX = x - location.x; // because of the translation
            // We are done with this row if the column is outside of the foreground image.
            if (fX >= foreground.cols)
                break;
            // Determine the opacity of the foreground pixel, using its fourth (alpha) channel.
            double opacity =
                ((double)foreground.data[fY * foreground.step + fX * foreground.channels() + 3]) / 255.;
            // Combine the background and foreground pixel using the opacity,
            // but only if opacity > 0.
            for (int c = 0; opacity > 0 && c < output.channels(); ++c)
            {
                unsigned char foregroundPx =
                    foreground.data[fY * foreground.step + fX * foreground.channels() + c];
                unsigned char backgroundPx =
                    background.data[y * background.step + x * background.channels() + c];
                output.data[y * output.step + output.channels() * x + c] =
                    backgroundPx * (1. - opacity) + foregroundPx * opacity;
            }
        }
    }
}

Draw a UIImage along a CGMutablePathRef

How do I draw a custom UIImage along a CGMutablePathRef? I can get the points from the CGMutablePathRef, but they do not give me the smooth curve that makes up the path.
I want to know if I can extract all of them, plus the ones that create the smooth path.
I've used CGPathApply, but I only get the control points, and when I draw my image it does not stay as smooth as the original CGMutablePathRef:
void pathFunction(void *info, const CGPathElement *element)
{
    if (element->type == kCGPathElementAddQuadCurveToPoint)
    {
        CGPoint firstPoint = element->points[1];
        CGPoint lastPoint = element->points[0];
        UIImage *tex = [UIImage imageNamed:@"myimage.png"];
        CGPoint vector = CGPointMake(lastPoint.x - firstPoint.x, lastPoint.y - firstPoint.y);
        CGFloat distance = hypotf(vector.x, vector.y);
        vector.x /= distance;
        vector.y /= distance;
        for (CGFloat i = 0; i < distance; i += 1.0f) {
            CGPoint p = CGPointMake(firstPoint.x + i * vector.x, firstPoint.y + i * vector.y);
            [tex drawAtPoint:p blendMode:kCGBlendModeNormal alpha:1.0f];
        }
    }
}
It seems like you are looking for the formula used to evaluate a cubic Bézier curve from a start point, an end point, and two control points:
B(t) = start*(1-t)^3 + 3*c1*t*(1-t)^2 + 3*c2*t^2*(1-t) + end*t^3
By setting a value for t between 0 and 1 you get a point somewhere along the curve (note that t is the curve parameter, not an exact percentage of arc length). I have a short description of how it works at the end of this blog post.
Update
To find the point at which to draw the image somewhere between the start and end points, you pick a t (for example 0.36) and use it to calculate the x and y values of that point.
CGPoint start, end, c1, c2; // set to some value of course
CGFloat t = 0.36;
CGFloat x = start.x*pow((1-t),3) + 3*c1.x*t*pow((1-t),2) + 3*c2.x*pow(t,2)*(1-t) + end.x*pow(t,3);
CGFloat y = start.y*pow((1-t),3) + 3*c1.y*t*pow((1-t),2) + 3*c2.y*pow(t,2)*(1-t) + end.y*pow(t,3);
CGPoint point = CGPointMake(x,y); // the point at t = 0.36 along the curve (in parameter space)
Given the path in the image from the original post, this would correspond to the orange circle.
If you do this for many points along the curve you will have many images positioned along the curve.
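As a hedged sketch of that loop (the helper name stampAlongCubic and the sample count are mine, not from the answer):

    // Sketch: stamp an image at evenly spaced t values along a cubic Bézier.
    static void stampAlongCubic(UIImage *tex, CGPoint start, CGPoint c1,
                                CGPoint c2, CGPoint end, NSUInteger samples)
    {
        for (NSUInteger i = 0; i <= samples; i++) {
            CGFloat t = (CGFloat)i / (CGFloat)samples;
            CGFloat mt = 1.0f - t;
            CGFloat x = start.x*mt*mt*mt + 3*c1.x*t*mt*mt + 3*c2.x*t*t*mt + end.x*t*t*t;
            CGFloat y = start.y*mt*mt*mt + 3*c1.y*t*mt*mt + 3*c2.y*t*t*mt + end.y*t*t*t;
            [tex drawAtPoint:CGPointMake(x, y) blendMode:kCGBlendModeNormal alpha:1.0f];
        }
    }

Note that equal steps in t are not equal steps in arc length, so the stamps bunch up where the curve bends sharply; for perfectly even spacing you would need to approximate arc length.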
Update 2
You are missing that kCGPathElementAddQuadCurveToPoint implicitly has 3 points: the start (the current/previous point), the control point (points[0]), and the end point (points[1]). For a quad curve both control points are the same, so c1 = c2. For kCGPathElementAddCurveToPoint you would get 2 different control points.
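A minimal sketch of an applier that carries the current point along (the function name and info-pointer usage are illustrative, not from the answer):

    // Sketch: track the current point so each curve element knows its start point.
    static void trackingPathFunction(void *info, const CGPathElement *element)
    {
        CGPoint *currentPoint = (CGPoint *)info;
        switch (element->type) {
            case kCGPathElementMoveToPoint:
            case kCGPathElementAddLineToPoint:
                *currentPoint = element->points[0];
                break;
            case kCGPathElementAddQuadCurveToPoint:
                // start = *currentPoint, control = points[0], end = points[1];
                // reuse the cubic formula with c1 = c2 = control, as described above.
                *currentPoint = element->points[1];
                break;
            case kCGPathElementAddCurveToPoint:
                // start = *currentPoint, c1 = points[0], c2 = points[1], end = points[2].
                *currentPoint = element->points[2];
                break;
            default:
                break;
        }
    }
    // Usage: CGPoint current = CGPointZero; CGPathApply(path, &current, trackingPathFunction);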

Precise pixel grid overlay in Core Graphics?

In my experiments with creating a pixel-centered image editor I've been trying to draw a precise grid overlay to help guide users when trying to access certain pixels. However, the grid I draw isn't very even, especially at smaller sizes. It's a regular pattern of one slightly larger column for every few normal columns, so I think it's a rounding issue, but I can't see it in my code. Here's my code:
- (void)drawRect:(NSRect)dirtyRect
{
    context = [[NSGraphicsContext currentContext] graphicsPort];
    CGContextAddRect(context, NSRectToCGRect(self.bounds));
    CGContextSetRGBStrokeColor(context, 1.0f, 0.0f, 0.0f, 1.0f);
    CGContextStrokePath(context);
    CGContextSetInterpolationQuality(context, kCGInterpolationNone);
    CGContextSetShouldAntialias(context, NO);
    if (image)
    {
        NSRect imageRect = NSZeroRect;
        imageRect.size = CGImageGetSize([image CGImage]);
        drawRect = [self bounds];
        NSRect viewRect = drawRect;
        CGFloat aspectRatio = imageRect.size.width / imageRect.size.height;
        if (viewRect.size.width / viewRect.size.height <= aspectRatio)
        {
            drawRect.size.width = viewRect.size.width;
            drawRect.size.height = imageRect.size.height * (viewRect.size.width / imageRect.size.width);
        }
        else
        {
            drawRect.size.height = viewRect.size.height;
            drawRect.size.width = imageRect.size.width * (viewRect.size.height / imageRect.size.height);
        }
        drawRect.origin.x += (viewRect.size.width - drawRect.size.width) / 2.0;
        drawRect.origin.y += (viewRect.size.height - drawRect.size.height) / 2.0;
        CGContextDrawImage(context, drawRect, [image CGImage]);
        if (showPixelGrid)
        {
            // Draw grid by creating start and end points for vertical and horizontal lines.
            // FIXME: Grid is uneven, especially at smaller sizes.
            CGContextSetStrokeColorWithColor(context, CGColorGetConstantColor(kCGColorBlack));
            CGContextAddRect(context, drawRect);
            CGContextStrokePath(context);
            NSUInteger numXPoints = (NSUInteger)imageRect.size.width * 2;
            NSUInteger numYPoints = (NSUInteger)imageRect.size.height * 2;
            CGPoint xPoints[numXPoints];
            CGPoint yPoints[numYPoints];
            CGPoint startPoint;
            CGPoint endPoint;
            CGFloat widthRatio = drawRect.size.width / imageRect.size.width;
            CGFloat heightRatio = drawRect.size.height / imageRect.size.height;
            startPoint.x = drawRect.origin.x;
            startPoint.y = drawRect.origin.y;
            endPoint.x = drawRect.origin.x;
            endPoint.y = drawRect.size.height + drawRect.origin.y;
            for (NSUInteger i = 0; i < numXPoints; i += 2)
            {
                startPoint.x += widthRatio;
                endPoint.x += widthRatio;
                xPoints[i] = startPoint;
                xPoints[i + 1] = endPoint;
            }
            startPoint.x = drawRect.origin.x;
            startPoint.y = drawRect.origin.y;
            endPoint.x = drawRect.size.width + drawRect.origin.x;
            endPoint.y = drawRect.origin.y;
            for (NSUInteger i = 0; i < numYPoints; i += 2)
            {
                startPoint.y += heightRatio;
                endPoint.y += heightRatio;
                yPoints[i] = startPoint;
                yPoints[i + 1] = endPoint;
            }
            CGContextStrokeLineSegments(context, xPoints, numXPoints);
            CGContextStrokeLineSegments(context, yPoints, numYPoints);
        }
    }
}
Any ideas?
UPDATE: I managed to get your code running with a few tweaks (where did CGImageGetSize() come from?), and I can't really see the problem, other than that the columns aren't all exactly even at extremely small sizes. That's just how it has to work, though. The only ways around this are either to fix scaling to integer multiples of the image size (in other words, use the largest integer multiple of the image size that is smaller than the view size) or to reduce the number of lines drawn at very small sizes to get rid of this artefact. There's a reason the pixel grid only becomes visible when you zoom in a long way in most editors. Not to mention that if the grid is still visible at 3-4x resolution, you're making the view way too busy.
I couldn't run the code you provided because there are a bunch of class ivars in there, but from a cursory glance, I'd say it has something to do with drawing on pixel boundaries. After you round to an integer to get rid of fuzzy AA artefacts (I notice you turned AA off, but ideally you shouldn't have to do that), you then need to add 0.5 to your origin to get your line drawn in the center of the pixel rather than on the boundary.
Like this:
+---X---+---+---+---+---+
| | | | Y | | |
+---+---+---+---+---+---+
X : CGPoint (1, 1)
Y : CGPoint (3.5, 0.5)
You want to draw from the center of the pixel, because otherwise your line straddles two pixels.
In other words, where you're setting up xPoints and yPoints, make sure to floor() or round() your values, and then add 0.5.
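For example (a sketch of just the adjustment, applied to your existing variables):

    // Snap to the pixel grid, then offset to the pixel center so a 1px line
    // covers exactly one column of pixels instead of straddling two.
    startPoint.x = floor(startPoint.x) + 0.5;
    endPoint.x   = floor(endPoint.x) + 0.5;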

drawRect performance

I need to draw lots of polygons, 500k to a million, on the iPad. After experimenting, I can only get 1 fps, if that. This is just an example; my real code has some good-sized polygons.
Here are a few questions:
Why don't I have to add the Quartz framework to my project?
If many of the polygons repeat, can I leverage that with views, or are they too heavyweight, etc.?
Any alternatives? QTPaint can handle this but dips into the GPU. Is there anything like Qt for iOS?
Can OpenGL increase 2D performance for this kind of drawing?
Example drawrect:
// X-Y array of boxes
- (void)drawRect:(CGRect)rect
{
    int reset = [self pan].x;
    int markX = reset;
    int markY = [self pan].y;
    CGContextRef context = UIGraphicsGetCurrentContext();
    for (int i = 0; i < 1000; i++) // 1000 x 1000 = 1,000,000 boxes
    {
        for (int j = 0; j < 1000; j++)
        {
            CGContextMoveToPoint(context, markX, markY);
            CGContextAddLineToPoint(context, markX, markY + 10);
            CGContextAddLineToPoint(context, markX + 10, markY + 10);
            CGContextAddLineToPoint(context, markX + 10, markY);
            CGContextAddLineToPoint(context, markX, markY);
            CGContextStrokePath(context);
            markX += 12;
        }
        markY += 12;
        markX = reset;
    }
}
The pan just moves the array of boxes around on screen with a pan gesture. Any help or hints would be greatly appreciated.
The key issue with your example is that it is not optimized. Whenever drawRect: is called, the device is rendering all 1,000,000 squares. Worse still, it's making 6,000,000 calls to those APIs in the loop. If you want to refresh this view at even a modest 30fps, that is 180,000,000 calls / second.
With your 'simple' example, the size of the draw area is 12,000px × 12,000px; the maximum area you can display on the iPad's display is 768×1024 (assuming full-screen portrait). Therefore, the code is wasting a lot of CPU resources drawing outside the visible area. UIKit has ways of handling this scenario with relative ease.
When managing content that is significantly larger than the visible area, you should limit drawing to only what is visible. UIKit has a couple of ways of handling this; a UIScrollView in combination with a view backed by a CATiledLayer is your best bet.
Steps:
Disclaimer: This is specifically an optimization of your example code above
Create a new View Based Application iPad project
Add a reference to the QuartzCore.framework
Create a new class, say MyLargeView, subclassed from UIView and add the following code:
#import <QuartzCore/QuartzCore.h>

@implementation MyLargeView

- (void)awakeFromNib {
    CATiledLayer *tiledLayer = (CATiledLayer *)[self layer];
    tiledLayer.tileSize = CGSizeMake(512.0f, 512.0f);
}

// Set the layer's class to be CATiledLayer.
+ (Class)layerClass {
    return [CATiledLayer class];
}

- (void)drawRect:(CGRect)rect {
    // Drawing code: only draws what is specified by the rect parameter.
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Set up some constants for the objects being drawn.
    const CGFloat width = 10.0f;             // width of rect
    const CGFloat height = 10.0f;            // height of rect
    const CGFloat xSpace = 4.0f;             // space between cells (horizontal)
    const CGFloat ySpace = 4.0f;             // space between cells (vertical)
    const CGFloat tWidth = width + xSpace;   // total width of cell
    const CGFloat tHeight = height + ySpace; // total height of cell

    CGFloat xStart = floorf(rect.origin.x / tWidth);  // first visible cell (column)
    CGFloat yStart = floorf(rect.origin.y / tHeight); // first visible cell (row)
    CGFloat xCells = rect.size.width / tWidth + 1;    // number of horizontal visible cells
    CGFloat yCells = rect.size.height / tHeight + 1;  // number of vertical visible cells

    for (int x = xStart; x < (xStart + xCells); x++) {
        for (int y = yStart; y < (yStart + yCells); y++) {
            CGFloat xpos = x * tWidth;
            CGFloat ypos = y * tHeight;
            CGContextMoveToPoint(context, xpos, ypos);
            CGContextAddLineToPoint(context, xpos, ypos + height);
            CGContextAddLineToPoint(context, xpos + width, ypos + height);
            CGContextAddLineToPoint(context, xpos + width, ypos);
            CGContextAddLineToPoint(context, xpos, ypos);
            CGContextStrokePath(context);
        }
    }
}

@end
Edit the view controller nib and add a UIScrollView to the view
Add a UIView to the UIScrollView and make sure it fills the UIScrollView
Change the class to MyLargeView
Set frame size of MyLargeView to 12,000×12,000
Finally, open up the view controller .m file and add the following override:
// Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
- (void)viewDidLoad {
    [super viewDidLoad];
    UIScrollView *scrollView = [self.view.subviews objectAtIndex:0];
    scrollView.contentSize = CGSizeMake(12000, 12000);
}
If you look at the drawRect: call, it only draws into the area specified by the rect parameter, which will correspond to the tile size (512×512) configured for the CATiledLayer in the awakeFromNib method. This approach will scale to a 1,000,000×1,000,000 pixel canvas.
An alternative to look at is Apple's ScrollViewSuite sample code, specifically 3_Tiling.
OpenGL is GPU hardware accelerated on iOS devices. Core Graphics drawing is not, and can be many many times slower when dealing with a large number of small graphics primitives (lines).
For lots of small squares, just writing them into a bitmap in C code is faster than Core Graphics line drawing. Then just draw the bitmap to the view once when done. But OpenGL would be even faster.
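A hedged sketch of that bitmap approach (the sizes and pixel format are assumptions; the square-writing loop is elided):

    // Sketch: let Core Graphics allocate an RGBA buffer, write into it in C,
    // then wrap it in a CGImage and draw that once.
    size_t width = 1024, height = 1024;
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef bmp = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                             space, kCGImageAlphaPremultipliedLast);
    uint8_t *pixels = CGBitmapContextGetData(bmp);
    // ...write square edges straight into `pixels`...
    CGImageRef squares = CGBitmapContextCreateImage(bmp);
    // Draw `squares` in drawRect: with CGContextDrawImage(), then clean up:
    CGImageRelease(squares);
    CGContextRelease(bmp);
    CGColorSpaceRelease(space);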
Regarding point 4: OpenGL should handle that fine. Check whether you can reuse those objects and whether you can move some of the logic into GLSL code.
OpenGL performance optimization (in context of WebGL but most of it should apply): http://www.youtube.com/watch?v=rfQ8rKGTVlg
I don't know the details of iOS history so this may not have been an option when the question was first posted. However, I wanted to call out CAShapeLayer as a simple option when dealing with path performance problems. "iOS Core Animation: Advanced Techniques" (find it on Google Books) says CAShapeLayer "uses hardware-accelerated drawing" which I'm taking to mean that it's a GPU-based implementation. The same book has a good usage example in chapter 6, which boils down to this:
Create a CAShapeLayer
Configure its lineWidth, fillColor, strokeColor, etc.
Add the layer as a sublayer of your view's containerView.layer
To draw a path, just set it to the layer's "path" property
This made a gigantic performance difference in my app, as measured by Instruments. If your performance problem is path-based, don't wade into OpenGL before you've tried CAShapeLayer.
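A minimal sketch of those four steps (containerView is from the book's example; somePath is assumed to be a UIBezierPath built elsewhere):

    // Sketch: GPU-backed path rendering via a CAShapeLayer sublayer.
    CAShapeLayer *shapeLayer = [CAShapeLayer layer];
    shapeLayer.lineWidth = 2.0f;
    shapeLayer.strokeColor = [UIColor blackColor].CGColor;
    shapeLayer.fillColor = [UIColor clearColor].CGColor;
    [containerView.layer addSublayer:shapeLayer];
    // Replacing the path later is cheap; the layer re-renders it for you.
    shapeLayer.path = somePath.CGPath;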
I encountered the same problem. After endless searching on Google, CAShapeLayer finally saved me! Here are the detailed steps you need to follow:
Create a view with CAShapeLayer as its layer type by overriding UIView's + (Class)layerClass method
Configure the layer's lineWidth, fillColor, strokeColor, etc.
Create a UIBezierPath instance
To draw a path, use the UIBezierPath instance to add lines, curves, or arcs, etc.; after you have finished drawing, just set bezierPath.CGPath as the layer's "path" property
Here is a simple demo that draws a simple curve when you touch the demo view:
// ShapeLayerView.m
- (instancetype)init {
    self = [super init];
    if (self) {
        _bezierPath = [UIBezierPath bezierPath];
        CAShapeLayer *shapeLayer = (CAShapeLayer *)self.layer;
        shapeLayer.lineWidth = 5;
        shapeLayer.lineJoin = kCALineJoinRound;
        shapeLayer.lineCap = kCALineCapRound;
        shapeLayer.strokeColor = [UIColor yellowColor].CGColor;
        shapeLayer.fillColor = [UIColor blueColor].CGColor;
    }
    return self;
}

+ (Class)layerClass {
    return [CAShapeLayer class];
}

- (void)customDrawShape {
    CAShapeLayer *shapeLayer = (CAShapeLayer *)self.layer;
    [_bezierPath removeAllPoints];
    [_bezierPath moveToPoint:CGPointMake(10, 10)];
    [_bezierPath addQuadCurveToPoint:CGPointMake(2, 2) controlPoint:CGPointMake(50, 50)];
    shapeLayer.path = _bezierPath.CGPath;
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesBegan:touches withEvent:event];
    [self customDrawShape];
}
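To try it, something like this in the hosting view controller should do (the class name ShapeLayerView is assumed from the file comment above):

    // Usage sketch: add the shape-layer-backed view; tapping it draws the curve.
    ShapeLayerView *shapeView = [[ShapeLayerView alloc] init];
    shapeView.frame = CGRectMake(20, 20, 200, 200);
    [self.view addSubview:shapeView];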

Creating a UIImage from a rotated UIImageView

I have a UIImageView with an image in it. I have rotated the image prior to display by setting the transform property of the UIImageView to CGAffineTransformMakeRotation(angle) where angle is the angle in radians.
I want to be able to create another UIImage that corresponds to the rotated version that I can see in my view.
I am almost there: by rotating the image context I get a rotated image:
- (UIImage *)rotatedImageFromImageView:(UIImageView *)imageView
{
    UIImage *rotatedImage;
    // Get the width and height of the bounding rectangle.
    CGRect boundingRect = [self getBoundingRectAfterRotation:imageView.bounds byAngle:angle];
    // Create a graphics context the size of the bounding rectangle.
    UIGraphicsBeginImageContext(boundingRect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Rotate and translate the context.
    CGAffineTransform ourTransform = CGAffineTransformIdentity;
    ourTransform = CGAffineTransformConcat(ourTransform, CGAffineTransformMakeRotation(angle));
    CGContextConcatCTM(context, ourTransform);
    // Draw the image into the context.
    CGContextDrawImage(context, CGRectMake(0, 0, imageView.image.size.width, imageView.image.size.height), imageView.image.CGImage);
    // Get an image from the context (releasing the CGImage to avoid a leak).
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    rotatedImage = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    // Clean up.
    UIGraphicsEndImageContext();
    return rotatedImage;
}
However, the image is not rotated about its centre. I have tried all kinds of transforms concatenated with my rotation to get it to rotate around the centre, but to no avail. Am I missing a trick? Is this even possible, since I am rotating the context, not the image?
Getting desperate to make this work now, so any help would be appreciated.
Dave
EDIT: I've been asked several times for my boundingRect code, so here it is:
- (CGRect)getBoundingRectAfterRotation:(CGRect)rectangle byAngle:(CGFloat)angleOfRotation {
    // Calculate the width and height of the bounding rectangle using basic trig.
    CGFloat newWidth = rectangle.size.width * fabs(cosf(angleOfRotation)) + rectangle.size.height * fabs(sinf(angleOfRotation));
    CGFloat newHeight = rectangle.size.height * fabs(cosf(angleOfRotation)) + rectangle.size.width * fabs(sinf(angleOfRotation));
    // Calculate the position of the origin.
    CGFloat newX = rectangle.origin.x + ((rectangle.size.width - newWidth) / 2);
    CGFloat newY = rectangle.origin.y + ((rectangle.size.height - newHeight) / 2);
    // Return the rectangle.
    return CGRectMake(newX, newY, newWidth, newHeight);
}
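As a quick sanity check: rotating a 100×100 rect by 45° gives fabs(cos) ≈ fabs(sin) ≈ 0.707, so the bounding rect comes out at roughly 141.4×141.4, with the origin shifted by about (-20.7, -20.7) so it stays centered on the original rect.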
OK - at last I seem to have done it. Any comments on the correctness would be useful... I needed a translate, a rotate, a scale, and an offset from the drawing rect position to make it work. The code is here:
CGAffineTransform transform = CGAffineTransformIdentity;
// Move the origin to the center of the bounding rect so the rotation
// happens about the image center, then rotate and flip vertically.
transform = CGAffineTransformTranslate(transform, boundingRect.size.width / 2, boundingRect.size.height / 2);
transform = CGAffineTransformRotate(transform, angle);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
// Draw the image into the context, offset so it is centered on the origin.
CGContextDrawImage(context, CGRectMake(-imageView.image.size.width / 2, -imageView.image.size.height / 2, imageView.image.size.width, imageView.image.size.height), imageView.image.CGImage);
// Get an image from the context (releasing the CGImage to avoid a leak).
CGImageRef cgImage = CGBitmapContextCreateImage(context);
rotatedImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
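For what it's worth, the scale by (1.0, -1.0) is needed because CGContextDrawImage works in Core Graphics' bottom-left-origin coordinate space, while the context created by UIGraphicsBeginImageContext is flipped for UIKit's top-left origin; without the flip the drawn image comes out vertically mirrored.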
