I'm rendering an image with text for one of my apps, and it has a noticeable impact on UI performance (it can cause as much as a ~1 second freeze), so I'm doing it on a background thread. Since the image has text, using UILabels and other UIViews makes it easy to lay everything out, and I render the view containing everything to an image.
However, I get a warning from Xcode saying that it's not allowed on the background thread because it uses UIKit. Why am I not allowed to call UIKit on the background thread even though my use case is completely self-contained and isolated from any rendering onscreen?
To help the code below make more sense: it draws an image that is a listing of several items, each of which consists of two small square images and the name of the item, all in a row. The list can have several columns. The code has been tweaked slightly (mostly variable names) to avoid showing proprietary code, but it does the same job.
My code:
NSArray<MyItem*>* items; // These are the items that I'm drawing. They
// get set before the following code is called.
// Processing code:
const CGFloat TITLE_FONT_SIZE = 50; // font size of the title
const CGFloat ITEM_FONT_SIZE = 25; // font size of the item names
const int OUTER_PADDING = 60; // padding from the edge of the image to the main content
const int ROW_PADDING = 13; // padding between rows
const int COL_PADDING = 100; // padding between columns
const int PADDING = 20; // padding between content items in a row
const int BOX_SIZE = 25; // how high/wide each image is
const int ROW_HEIGHT = BOX_SIZE; // pixel height of a line
const int COL_WIDTH = 500; // pixel width of a column (image1, image2, and name)
// compute the dimensions of the image
UILabel* titleLabel = [[UILabel alloc] init];
titleLabel.font = [UIFont systemFontOfSize:TITLE_FONT_SIZE];
titleLabel.text = @"My image";
[titleLabel sizeToFit];
titleLabel.frame = CGRectMake(OUTER_PADDING, OUTER_PADDING / 2, titleLabel.frame.size.width, titleLabel.frame.size.height);
const int MIN_NUM_COLS = 1 + ((titleLabel.frame.size.width - COL_WIDTH) / (COL_WIDTH + COL_PADDING));
const int NORMAL_NUM_COLS = (int)ceil(sqrt([items count] / (COL_WIDTH / (ROW_HEIGHT))));
const int NUM_COLS = (MIN_NUM_COLS > NORMAL_NUM_COLS ? MIN_NUM_COLS : NORMAL_NUM_COLS);
const int NUM_ROWS = (int)ceil([items count] / (float)NUM_COLS);
const int NUM_OVERFLOW_ROWS = [items count] % NUM_ROWS;
const int titleWidth = titleLabel.frame.size.width;
const int defaultWidth = (NUM_COLS * (COL_WIDTH + COL_PADDING)) - COL_PADDING;
const int pixelWidth = (2 * OUTER_PADDING) + (titleWidth > defaultWidth ? titleWidth : defaultWidth);
const int pixelHeight = (2 * OUTER_PADDING) + (TITLE_FONT_SIZE + PADDING) + (NUM_ROWS * (ROW_HEIGHT + ROW_PADDING)) - ROW_PADDING;
const int nbytes = 4 * pixelHeight * pixelWidth;
uint8_t* data = (uint8_t*)malloc(nbytes);
memset(data, 255, nbytes);
CGContextRef context = CGBitmapContextCreate(data, pixelWidth, pixelHeight, 8, 4 * pixelWidth, CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault | kCGImageAlphaNoneSkipLast);
// --------------------------------------------------
// create a view hierarchy and then draw to our context
UIView* mainView = [[UIView alloc] init];
[mainView addSubview:titleLabel];
// setup all the views
int keyIndex = 0;
CGFloat x = OUTER_PADDING;
CGFloat starty = titleLabel.frame.origin.y + titleLabel.frame.size.height + PADDING;
for (int col = 0; col < NUM_COLS; col++)
{
int nrows = (col == NUM_COLS - 1 ? NUM_OVERFLOW_ROWS : NUM_ROWS); // the last column holds the overflow rows
CGFloat y = starty;
for (int row = 0; (row < nrows) && (keyIndex < [items count]); row++)
{
CGFloat tempx = x;
MyItem* item = [items objectAtIndex:keyIndex];
UIImageView* imageview1 = [[UIImageView alloc] initWithImage:item.image1];
imageview1.frame = CGRectMake(tempx, y, BOX_SIZE, BOX_SIZE);
[mainView addSubview:imageview1];
tempx += BOX_SIZE + PADDING;
UIImageView* imageview2 = [[UIImageView alloc] initWithImage:item.image2];
imageview2.frame = CGRectMake(tempx, y, BOX_SIZE, BOX_SIZE);
[mainView addSubview:imageview2];
tempx += BOX_SIZE + PADDING;
UILabel* label = [[UILabel alloc] init];
label.font = [UIFont systemFontOfSize:ITEM_FONT_SIZE];
label.text = item.name;
[label sizeToFit];
label.center = CGPointMake(tempx + (label.frame.size.width / 2), imageview2.center.y);
[mainView addSubview:label];
y += ROW_HEIGHT + ROW_PADDING;
keyIndex++;
}
x += COL_WIDTH + COL_PADDING;
}
// --------------------------------------------------
// draw everything to actually generate the image
CGContextConcatCTM(context, CGAffineTransformMake(1, 0, 0, -1, 0, pixelHeight));
[mainView.layer renderInContext:context];
CGImageRef cgimage = CGBitmapContextCreateImage(context);
myCoolImage = [UIImage imageWithCGImage:cgimage];
CGImageRelease(cgimage);
CGContextRelease(context);
free(data);
As we've established in comments, what you're doing is both illegitimate and slow.
Arranging and sizing UILabel and UIImageView objects is slow, and calling CALayer's renderInContext: is really slow.
And it isn't how you draw.
Everything you're doing has its analogue in the actual drawing world (Quartz 2D), and if you did it that way, not only would it be legal in the background, it probably wouldn't even need to be in the background because it would be so much faster. So:
Every place you use a UILabel, you can achieve exactly the same effect by using NSAttributedString draw... commands.
Every place you use a UIImageView, you can achieve exactly the same effect by using UIImage draw... commands.
Any of us who does any extensive drawing has learned to create structured layouts of the type you're making by using actual drawing code, and now is your chance to learn to do it too.
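For example, here is a minimal sketch of that approach, reusing the constants, `items` array, and `MyItem` properties from the question (single column only, no size computation, so it's an illustration rather than a drop-in replacement). Note that UIGraphicsBeginImageContextWithOptions has been documented as safe to call from any thread since iOS 4:
// A sketch only: assumes the question's constants and data; draws one column.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(pixelWidth, pixelHeight), YES, 1);
[[UIColor whiteColor] setFill];
UIRectFill(CGRectMake(0, 0, pixelWidth, pixelHeight));
NSDictionary* titleAttrs = @{ NSFontAttributeName: [UIFont systemFontOfSize:TITLE_FONT_SIZE] };
[@"My image" drawAtPoint:CGPointMake(OUTER_PADDING, OUTER_PADDING / 2) withAttributes:titleAttrs];
NSDictionary* itemAttrs = @{ NSFontAttributeName: [UIFont systemFontOfSize:ITEM_FONT_SIZE] };
CGFloat y = (OUTER_PADDING / 2) + TITLE_FONT_SIZE + PADDING;
for (MyItem* item in items) {
    CGFloat x = OUTER_PADDING;
    [item.image1 drawInRect:CGRectMake(x, y, BOX_SIZE, BOX_SIZE)];
    x += BOX_SIZE + PADDING;
    [item.image2 drawInRect:CGRectMake(x, y, BOX_SIZE, BOX_SIZE)];
    x += BOX_SIZE + PADDING;
    [item.name drawAtPoint:CGPointMake(x, y) withAttributes:itemAttrs];
    y += ROW_HEIGHT + ROW_PADDING;
}
UIImage* result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();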
Xcode has the ability to capture OpenGL ES frames from the iPad, and that's great! I would like to extend this functionality and capture an entire OpenGL ES movie of my application. Is there a way to do that?
If it's not possible using Xcode, how can I do it without much effort or big changes to my code? Thank you very much!
I use a very simple technique, which requires just a few lines of code.
You can capture each OpenGL frame into a UIImage using this code:
// Release callback for the CGDataProvider below; frees the flipped pixel
// buffer once the CGImage no longer needs it.
static void releaseScreenshotData(void *info, const void *data, size_t size) {
free((void *)data);
}
- (UIImage*)captureScreen {
NSInteger dataLength = framebufferWidth * framebufferHeight * 4;
// Allocate buffers: one for the raw read-back, one for the flipped result.
GLuint *buffer = (GLuint *)malloc(dataLength);
GLuint *resultsBuffer = (GLuint *)malloc(dataLength);
// Read data
glReadPixels(0, 0, framebufferWidth, framebufferHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// Flip vertical
for(int y = 0; y < framebufferHeight; y++) {
for(int x = 0; x < framebufferWidth; x++) {
resultsBuffer[x + y * framebufferWidth] = buffer[x + (framebufferHeight - 1 - y) * framebufferWidth];
}
}
free(buffer);
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, resultsBuffer, dataLength, releaseScreenshotData);
// prep the ingredients
const int bitsPerComponent = 8;
const int bitsPerPixel = 4 * bitsPerComponent;
const int bytesPerRow = 4 * framebufferWidth;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaNoneSkipLast; // glReadPixels returns RGBA; the alpha byte is ignored
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef imageRef = CGImageCreate(framebufferWidth, framebufferHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);
// then make the UIImage from that
UIImage *image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
return image;
}
Then you will capture each frame in your main loop:
- (void)onTimer {
// Compute and render new frame
[self update];
// Recording
if (recordingMode == RecordingModeMovie) {
recordingFrameNum++;
// Save frame
UIImage *image = [self captureScreen];
NSString *fileName = [NSString stringWithFormat:@"%d.jpg", (int)recordingFrameNum];
[UIImageJPEGRepresentation(image, 1.0) writeToFile:[basePath stringByAppendingPathComponent:fileName] atomically:NO];
}
}
At the end you will have tons of JPEG files, which can easily be converted into a movie with Time Lapse Assembler.
If you want a nice 30 FPS movie, hard-fix your simulation timestep to 1/30.0 sec per frame, as sketched below.
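Something like this hypothetical update method (stepSimulationBy:, wallClockDelta, and renderFrame are placeholders for whatever your loop already does):
- (void)update {
    // While recording, advance the simulation by a fixed 1/30 s per captured
    // frame, so the assembled movie plays back at exactly 30 FPS regardless
    // of how long each capture takes.
    NSTimeInterval dt = (recordingMode == RecordingModeMovie) ? (1.0 / 30.0)
                                                              : [self wallClockDelta];
    [self stepSimulationBy:dt];
    [self renderFrame];
}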
I want to create an image out of a Core OpenGL context.
I used the following code, but it creates a black image. So I guess I cannot use glReadPixels there? Any other suggestions, please?
int myDataLength = 320 * 480 * 4;
// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for(int y = 0; y < 480; y++)
{
for(int x = 0; x < 320 * 4; x++)
{
buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
}
}
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * 320;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef image= CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, false, renderingIntent);
//PRINT image... Its black!!!!!!
CGDataProviderRelease(provider);
free(buffer);
free(buffer2);
Before you do a glReadPixels call, you must:
set proper packing (see the glPixelStorei reference page)
select the right buffer to read from with glReadBuffer (front after swapping, back before swapping; I recommend swapping and then reading from the front)
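In code, that boils down to something like this (a sketch for a double-buffered desktop GL context; adjust the size to your view):
// Tightly pack rows so glReadPixels writes exactly width * 4 bytes per row.
glPixelStorei(GL_PACK_ALIGNMENT, 1);
// After swapping, the rendered frame is in the front buffer; read from it.
glReadBuffer(GL_FRONT);
glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);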
I'm writing an application that operates on black & white images. I'm doing it by passing an NSImage object into my method and then making an NSBitmapImageRep from the NSImage. It all works, but quite slowly. Here's my code:
- (NSImage *)skeletonization: (NSImage *)image
{
int x = 0, y = 0;
NSUInteger pixelVariable = 0;
NSBitmapImageRep *bitmapImageRep = [[NSBitmapImageRep alloc] initWithData:[image TIFFRepresentation]];
[myHelpText setIntValue:[bitmapImageRep pixelsWide]];
[myHelpText2 setIntValue:[bitmapImageRep pixelsHigh]];
NSColor *black = [NSColor blackColor];
NSColor *white = [NSColor whiteColor];
[myColor set];
[myColor2 set];
for (x=0; x<=[bitmapImageRep pixelsWide]; x++) {
for (y=0; y<=[bitmapImageRep pixelsHigh]; y++) {
// This is only to see if it's working
[bitmapImageRep setColor:myColor atX:x y:y];
}
}
[myColor release];
[myColor2 release];
NSImage *producedImage = [[NSImage alloc] init];
[producedImage addRepresentation:bitmapImageRep];
[bitmapImageRep release];
return [producedImage autorelease];
}
So I tried to use CIImage, but I don't know how to get at each pixel by its (x, y) coordinates. That is really important.
Use the representations array property of NSImage to get your NSBitmapImageRep; it should be faster than serializing your image to a TIFF and back.
Use the bitmapData property of the NSBitmapImageRep to access the image bytes directly.
E.g.:
unsigned char black = 0;
unsigned char white = 255;
NSBitmapImageRep* bitmapImageRep = (NSBitmapImageRep*)[[image representations] firstObject];
// you will need to do checks here to determine the pixelformat of your bitmap data
unsigned char* imageData = [bitmapImageRep bitmapData];
int rowBytes = [bitmapImageRep bytesPerRow];
int bpp = [bitmapImageRep bitsPerPixel] / 8;
for (int x = 0; x < [bitmapImageRep pixelsWide]; x++) { // don't use <=
for (int y = 0; y < [bitmapImageRep pixelsHigh]; y++) {
*(imageData + y * rowBytes + x * bpp ) = black; // Red
*(imageData + y * rowBytes + x * bpp +1) = black; // Green
*(imageData + y * rowBytes + x * bpp +2) = black; // Blue
*(imageData + y * rowBytes + x * bpp +3) = 255; // Alpha
}
}
You will need to know what pixel format your images use before you can go playing with their data; look at the bitsPerPixel property of NSBitmapImageRep to help determine whether your image is in RGBA format.
You could be working with a grayscale image, an RGB image, or possibly CMYK. Either convert the image to what you want first, or handle the data in the loop differently, e.g. with a quick check like the one below.
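A sketch of such a check (it accepts only interleaved 8-bit RGBA and bails out otherwise):
NSBitmapImageRep* rep = (NSBitmapImageRep*)[[image representations] firstObject];
if (![rep isKindOfClass:[NSBitmapImageRep class]] || [rep isPlanar]
    || [rep samplesPerPixel] != 4 || [rep bitsPerPixel] != 32) {
    // Not interleaved 8-bit RGBA: convert the image first,
    // or branch to different handling in the loop.
    return nil;
}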
I have a UIImage (Cocoa Touch). From that, I'm happy to get a CGImage or anything else you'd like that's available. I'd like to write this function:
- (int)getRGBAFromImage:(UIImage *)image atX:(int)xx andY:(int)yy {
// [...]
// What do I want to read about to help
// me fill in this bit, here?
// [...]
int result = (red << 24) | (green << 16) | (blue << 8) | alpha;
return result;
}
FYI, I combined Keremk's answer with my original outline, cleaned up the typos, generalized it to return an array of colors, and got the whole thing to compile. Here is the result:
+ (NSArray*)getRGBAsFromImage:(UIImage*)image atX:(int)x andY:(int)y count:(int)count
{
NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];
// First get the image into your data buffer
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = (unsigned char*) calloc(height * width * 4, sizeof(unsigned char));
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
// Now your rawData contains the image data in the RGBA8888 pixel format.
NSUInteger byteIndex = (bytesPerRow * y) + x * bytesPerPixel;
for (int i = 0 ; i < count ; ++i)
{
CGFloat alpha = ((CGFloat) rawData[byteIndex + 3]) / 255.0f;
// Un-premultiply and normalize to 0..1; guard against divide-by-zero for fully transparent pixels.
CGFloat red = (alpha > 0) ? (((CGFloat) rawData[byteIndex]) / 255.0f) / alpha : 0;
CGFloat green = (alpha > 0) ? (((CGFloat) rawData[byteIndex + 1]) / 255.0f) / alpha : 0;
CGFloat blue = (alpha > 0) ? (((CGFloat) rawData[byteIndex + 2]) / 255.0f) / alpha : 0;
byteIndex += bytesPerPixel;
UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
[result addObject:acolor];
}
free(rawData);
return result;
}
One way of doing it is to draw the image into a bitmap context that is backed by a given buffer for a given colorspace (in this case RGB). Note that this will copy the image data into that buffer, so you do want to cache it instead of performing this operation every time you need to get pixel values.
See the sample below:
// First get the image into your data buffer
CGImageRef image = [myUIImage CGImage];
NSUInteger width = CGImageGetWidth(image);
NSUInteger height = CGImageGetHeight(image);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
CGContextRelease(context);
// Now your rawData contains the image data in the RGBA8888 pixel format.
int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
red = rawData[byteIndex];
green = rawData[byteIndex + 1];
blue = rawData[byteIndex + 2];
alpha = rawData[byteIndex + 3];
Apple's Technical Q&A QA1509 shows the following simple approach:
CFDataRef CopyImagePixels(CGImageRef inImage)
{
return CGDataProviderCopyData(CGImageGetDataProvider(inImage));
}
Use CFDataGetBytePtr to get at the actual bytes (and the various CGImageGet* functions to understand how to interpret them).
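For example, to pull one pixel out of the copied data (a sketch; the component order is whatever the image's bitmap info says, so don't assume RGBA):
CFDataRef pixelData = CopyImagePixels(imageRef);
const UInt8* bytes = CFDataGetBytePtr(pixelData);
size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);
size_t bytesPerPixel = CGImageGetBitsPerPixel(imageRef) / 8;
const UInt8* pixel = bytes + (y * bytesPerRow) + (x * bytesPerPixel);
// Interpret pixel[0..bytesPerPixel-1] according to CGImageGetAlphaInfo(imageRef)
// and CGImageGetBitmapInfo(imageRef).
CFRelease(pixelData);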
I couldn't believe that there was not one single correct answer here. There's no need to allocate buffers by hand, and the un-premultiplied values still need to be normalized back into range. To cut to the chase, here is the correct version for Swift 4. For a UIImage, just use .cgImage.
extension CGImage {
func colors(at: [CGPoint]) -> [UIColor]? {
let colorSpace = CGColorSpaceCreateDeviceRGB()
let bytesPerPixel = 4
let bytesPerRow = bytesPerPixel * width
let bitsPerComponent = 8
let bitmapInfo: UInt32 = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Big.rawValue
guard let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo),
let ptr = context.data?.assumingMemoryBound(to: UInt8.self) else {
return nil
}
context.draw(self, in: CGRect(x: 0, y: 0, width: width, height: height))
return at.map { p in
let i = bytesPerRow * Int(p.y) + bytesPerPixel * Int(p.x)
let a = CGFloat(ptr[i + 3]) / 255.0
// Un-premultiply; guard against fully transparent pixels (a == 0).
let r = a > 0 ? (CGFloat(ptr[i]) / a) / 255.0 : 0
let g = a > 0 ? (CGFloat(ptr[i + 1]) / a) / 255.0 : 0
let b = a > 0 ? (CGFloat(ptr[i + 2]) / a) / 255.0 : 0
return UIColor(red: r, green: g, blue: b, alpha: a)
}
}
}
The reason you have to draw/convert the image into a buffer first is that images can come in several different formats. This step converts the image to a consistent format you can read.
NSString * path = [[NSBundle mainBundle] pathForResource:@"filename" ofType:@"jpg"];
UIImage * img = [[UIImage alloc]initWithContentsOfFile:path];
CGImageRef image = [img CGImage];
CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(image));
const unsigned char * buffer = CFDataGetBytePtr(data);
Here is an SO thread where @Matt renders only the desired pixel into a 1x1 context by displacing the image so that the desired pixel aligns with the one pixel in the context.
Swift 5 version
The answers given here are either outdated or incorrect because they don't take into account the following:
The pixel size of the image can differ from its point size, which is what image.size.width/image.size.height return.
There can be various layouts used by pixel components in the image, such as BGRA, ABGR, ARGB etc. or may not have an alpha component at all, such as BGR and RGB. For example, UIView.drawHierarchy(in:afterScreenUpdates:) method can produce BGRA images.
Color components can be premultiplied by the alpha for all pixels in the image and need to be divided by alpha in order to restore the original color.
Because of memory optimizations used by CGImage, the size of a pixel row in bytes can be greater than the mere multiplication of the pixel width by 4.
The code below is a universal Swift 5 solution that gets the UIColor of a pixel in all such special cases. The code is optimized for usability and clarity, not for performance.
public extension UIImage {
var pixelWidth: Int {
return cgImage?.width ?? 0
}
var pixelHeight: Int {
return cgImage?.height ?? 0
}
func pixelColor(x: Int, y: Int) -> UIColor {
assert(
0..<pixelWidth ~= x && 0..<pixelHeight ~= y,
"Pixel coordinates are out of bounds")
guard
let cgImage = cgImage,
let data = cgImage.dataProvider?.data,
let dataPtr = CFDataGetBytePtr(data),
let colorSpaceModel = cgImage.colorSpace?.model,
let componentLayout = cgImage.bitmapInfo.componentLayout
else {
assertionFailure("Could not get a pixel of an image")
return .clear
}
assert(
colorSpaceModel == .rgb,
"The only supported color space model is RGB")
assert(
cgImage.bitsPerPixel == 32 || cgImage.bitsPerPixel == 24,
"A pixel is expected to be either 4 or 3 bytes in size")
let bytesPerRow = cgImage.bytesPerRow
let bytesPerPixel = cgImage.bitsPerPixel/8
let pixelOffset = y*bytesPerRow + x*bytesPerPixel
if componentLayout.count == 4 {
let components = (
dataPtr[pixelOffset + 0],
dataPtr[pixelOffset + 1],
dataPtr[pixelOffset + 2],
dataPtr[pixelOffset + 3]
)
var alpha: UInt8 = 0
var red: UInt8 = 0
var green: UInt8 = 0
var blue: UInt8 = 0
switch componentLayout {
case .bgra:
alpha = components.3
red = components.2
green = components.1
blue = components.0
case .abgr:
alpha = components.0
red = components.3
green = components.2
blue = components.1
case .argb:
alpha = components.0
red = components.1
green = components.2
blue = components.3
case .rgba:
alpha = components.3
red = components.0
green = components.1
blue = components.2
default:
return .clear
}
// If chroma components are premultiplied by alpha and the alpha is `0`,
// keep the chroma components to their current values.
if cgImage.bitmapInfo.chromaIsPremultipliedByAlpha && alpha != 0 {
let invUnitAlpha = 255/CGFloat(alpha)
red = UInt8((CGFloat(red)*invUnitAlpha).rounded())
green = UInt8((CGFloat(green)*invUnitAlpha).rounded())
blue = UInt8((CGFloat(blue)*invUnitAlpha).rounded())
}
return .init(red: red, green: green, blue: blue, alpha: alpha)
} else if componentLayout.count == 3 {
let components = (
dataPtr[pixelOffset + 0],
dataPtr[pixelOffset + 1],
dataPtr[pixelOffset + 2]
)
var red: UInt8 = 0
var green: UInt8 = 0
var blue: UInt8 = 0
switch componentLayout {
case .bgr:
red = components.2
green = components.1
blue = components.0
case .rgb:
red = components.0
green = components.1
blue = components.2
default:
return .clear
}
return .init(red: red, green: green, blue: blue, alpha: UInt8(255))
} else {
assertionFailure("Unsupported number of pixel components")
return .clear
}
}
}
public extension UIColor {
convenience init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
self.init(
red: CGFloat(red)/255,
green: CGFloat(green)/255,
blue: CGFloat(blue)/255,
alpha: CGFloat(alpha)/255)
}
}
public extension CGBitmapInfo {
enum ComponentLayout {
case bgra
case abgr
case argb
case rgba
case bgr
case rgb
var count: Int {
switch self {
case .bgr, .rgb: return 3
default: return 4
}
}
}
var componentLayout: ComponentLayout? {
guard let alphaInfo = CGImageAlphaInfo(rawValue: rawValue & Self.alphaInfoMask.rawValue) else { return nil }
let isLittleEndian = contains(.byteOrder32Little)
if alphaInfo == .none {
return isLittleEndian ? .bgr : .rgb
}
let alphaIsFirst = alphaInfo == .premultipliedFirst || alphaInfo == .first || alphaInfo == .noneSkipFirst
if isLittleEndian {
return alphaIsFirst ? .bgra : .abgr
} else {
return alphaIsFirst ? .argb : .rgba
}
}
var chromaIsPremultipliedByAlpha: Bool {
let alphaInfo = CGImageAlphaInfo(rawValue: rawValue & Self.alphaInfoMask.rawValue)
return alphaInfo == .premultipliedFirst || alphaInfo == .premultipliedLast
}
}
UIImage is a wrapper; the bytes live in a CGImage or a CIImage
According to the Apple reference on UIImage, the object is immutable and you have no access to the backing bytes. While it is true that you can access the CGImage data if the UIImage was populated with a CGImage (explicitly or implicitly), it will return NULL if the UIImage is backed by a CIImage, and vice versa.
Image objects do not provide direct access to their underlying image data. However, you can retrieve the image data in other formats for use in your app. Specifically, you can use the cgImage and ciImage properties to retrieve versions of the image that are compatible with Core Graphics and Core Image, respectively. You can also use the UIImagePNGRepresentation(_:) and UIImageJPEGRepresentation(_:_:) functions to generate an NSData object containing the image data in either the PNG or JPEG format.
Common tricks for getting around this issue
As stated, your options are:
UIImagePNGRepresentation or JPEG
Determine whether the image has CGImage or CIImage backing data and get it there
Neither of these is a particularly good trick if you want output that isn't ARGB, PNG, or JPEG data, and the data isn't already backed by a CIImage.
My recommendation: try CIImage
While developing your project, it might make more sense for you to avoid UIImage altogether and pick something else. UIImage, as an Obj-C image wrapper, is so often backed by CGImage that we take it for granted. CIImage tends to be a better wrapper format in that you can use a CIContext to get out the format you desire without needing to know how it was created. In your case, getting the bitmap would be a matter of calling
- render:toBitmap:rowBytes:bounds:format:colorSpace:
As an added bonus you can start doing nice manipulations to the image by chaining filters onto the image. This solves a lot of the issues where the image is upside down or needs to be rotated/scaled etc.
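A rough sketch of that call (assuming you already have a CIImage named ciImage and want an 8-bit RGBA buffer; the names here are illustrative):
CIContext* cicontext = [CIContext contextWithOptions:nil];
size_t w = (size_t)ciImage.extent.size.width;
size_t h = (size_t)ciImage.extent.size.height;
uint8_t* bitmap = calloc(w * h * 4, 1);
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
// Render the (possibly filter-chained) CIImage straight into our buffer
// in the pixel format we asked for, regardless of the source's format.
[cicontext render:ciImage
         toBitmap:bitmap
         rowBytes:w * 4
           bounds:CGRectMake(0, 0, w, h)
           format:kCIFormatRGBA8
       colorSpace:cs];
CGColorSpaceRelease(cs);
// ... read pixels out of `bitmap`, then free(bitmap) when done.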
Building on Olie's and Algal's answers, here is an updated answer for Swift 3:
public func getRGBAs(fromImage image: UIImage, x: Int, y: Int, count: Int) -> [UIColor] {
var result = [UIColor]()
// First get the image into your data buffer
guard let cgImage = image.cgImage else {
print("Couldn't get a CGImage from the UIImage")
return []
}
let width = cgImage.width
let height = cgImage.height
let colorSpace = CGColorSpaceCreateDeviceRGB()
let rawdata = calloc(height*width*4, MemoryLayout<CUnsignedChar>.size)
let bytesPerPixel = 4
let bytesPerRow = bytesPerPixel * width
let bitsPerComponent = 8
let bitmapInfo: UInt32 = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Big.rawValue
guard let context = CGContext(data: rawdata, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else {
print("CGContext creation failed")
return result
}
context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
// Now your rawData contains the image data in the RGBA8888 pixel format.
var byteIndex = bytesPerRow * y + bytesPerPixel * x
for _ in 0..<count {
let alpha = CGFloat(rawdata!.load(fromByteOffset: byteIndex + 3, as: UInt8.self)) / 255.0
// Un-premultiply and normalize to 0...1; guard against zero alpha.
let red = alpha > 0 ? (CGFloat(rawdata!.load(fromByteOffset: byteIndex, as: UInt8.self)) / 255.0) / alpha : 0
let green = alpha > 0 ? (CGFloat(rawdata!.load(fromByteOffset: byteIndex + 1, as: UInt8.self)) / 255.0) / alpha : 0
let blue = alpha > 0 ? (CGFloat(rawdata!.load(fromByteOffset: byteIndex + 2, as: UInt8.self)) / 255.0) / alpha : 0
byteIndex += bytesPerPixel
let aColor = UIColor(red: red, green: green, blue: blue, alpha: alpha)
result.append(aColor)
}
free(rawdata)
return result
}
To access the raw RGB values of a UIImage in Swift 5, use the underlying CGImage and its dataProvider:
import UIKit
let image = UIImage(named: "example.png")!
guard let cgImage = image.cgImage,
let data = cgImage.dataProvider?.data,
let bytes = CFDataGetBytePtr(data) else {
fatalError("Couldn't access image data")
}
assert(cgImage.colorSpace?.model == .rgb)
let bytesPerPixel = cgImage.bitsPerPixel / cgImage.bitsPerComponent
for y in 0 ..< cgImage.height {
for x in 0 ..< cgImage.width {
let offset = (y * cgImage.bytesPerRow) + (x * bytesPerPixel)
let components = (r: bytes[offset], g: bytes[offset + 1], b: bytes[offset + 2])
print("[x:\(x), y:\(y)] \(components)")
}
print("---")
}
https://www.ralfebert.de/ios/examples/image-processing/uiimage-raw-pixels/
Based on different answers, but mainly on this one, the following works for what I need:
UIImage *image1 = ...; // The image from where you want a pixel data
int pixelX = ...; // The X coordinate of the pixel you want to retrieve
int pixelY = ...; // The Y coordinate of the pixel you want to retrieve
uint32_t pixel1; // Where the pixel data is to be stored
CGColorSpaceRef colorSpace1 = CGColorSpaceCreateDeviceRGB();
CGContextRef context1 = CGBitmapContextCreate(&pixel1, 1, 1, 8, 4, colorSpace1, kCGImageAlphaNoneSkipFirst);
CGContextDrawImage(context1, CGRectMake(-pixelX, -pixelY, CGImageGetWidth(image1.CGImage), CGImageGetHeight(image1.CGImage)), image1.CGImage);
CGContextRelease(context1);
CGColorSpaceRelease(colorSpace1);
As a result of these lines, you will have the pixel in AARRGGBB format, with alpha always set to FF, in the 4-byte unsigned integer pixel1.
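Taking that AARRGGBB layout at face value, unpacking the channels is then plain bit-shifting:
// Channels per the AARRGGBB layout described above.
uint8_t alpha1 = (pixel1 >> 24) & 0xFF; // always 0xFF here (alpha was skipped)
uint8_t red1   = (pixel1 >> 16) & 0xFF;
uint8_t green1 = (pixel1 >>  8) & 0xFF;
uint8_t blue1  =  pixel1        & 0xFF;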