How to check if a UIImage is empty or blank

I am using the following code to save a UIImage. Before saving the image, I want to check whether it is empty or blank.
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Note: I have already looked at these questions:
UIImage, check if contain image
How to check if a uiimage is blank? (empty, transparent)
I couldn't find a solution in the links above. Thanks!

I found a solution to the above problem; it may help someone in the future. The function below works even for transparent PNG images.
-(BOOL)isBlankImage:(UIImage *)myImage
{
    typedef struct
    {
        uint8_t red;
        uint8_t green;
        uint8_t blue;
        uint8_t alpha;
    } MyPixel_T;

    CGImageRef myCGImage = [myImage CGImage];

    // Get a bitmap context for the image (this assumes an RGBA, 8-bit-per-component layout)
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL, CGImageGetWidth(myCGImage), CGImageGetHeight(myCGImage),
                                                       CGImageGetBitsPerComponent(myCGImage), CGImageGetBytesPerRow(myCGImage),
                                                       CGImageGetColorSpace(myCGImage), CGImageGetBitmapInfo(myCGImage));
    // Draw the image into the context
    CGContextDrawImage(bitmapContext, CGRectMake(0, 0, CGImageGetWidth(myCGImage), CGImageGetHeight(myCGImage)), myCGImage);

    // Get pixel data for the image
    MyPixel_T *pixels = CGBitmapContextGetData(bitmapContext);
    size_t pixelCount = CGImageGetWidth(myCGImage) * CGImageGetHeight(myCGImage);
    for (size_t i = 0; i < pixelCount; i++)
    {
        MyPixel_T p = pixels[i];
        // Your definition of what's blank may differ from mine
        if (p.red > 0 || p.green > 0 || p.blue > 0 || p.alpha > 0)
        {
            CGContextRelease(bitmapContext); // don't leak the context on the early return
            return NO;
        }
    }

    CGContextRelease(bitmapContext); // the original never released this context
    return YES;
}
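For completeness, a minimal usage sketch tying this back to the snippet in the question (UIImageWriteToSavedPhotosAlbum is just one way to save; swap in your own persistence code):
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
if (![self isBlankImage:image]) {
    // Only persist images that contain at least one non-zero pixel
    UIImageWriteToSavedPhotosAlbum(image, nil, NULL, NULL);
}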

Related

How do I return a CGImageRef from CGWindowListCreateImage to C#?

I'm currently attempting to write a macOS plugin for Unity. I am taking a screenshot of the desktop with CGWindowListCreateImage. I'm trying to figure out how to return the byte[] data to C# so I can create a Texture2D from it. Any help would be greatly appreciated, thank you.
It doesn't let me return an NSArray*. The .h file is at the bottom.
NSArray* getScreenshot()
{
    CGImageRef screenShot = CGWindowListCreateImage( CGRectInfinite, kCGWindowListOptionOnScreenOnly, kCGNullWindowID, kCGWindowImageDefault);
    NSArray *result = getRGBAsFromImage(screenShot);
    CGImageRelease(screenShot); // CGWindowListCreateImage follows the Create rule, so release it
    return result;
}
NSArray* getRGBAsFromImage(CGImageRef imageRef)
{
    // First get the image into your data buffer
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    NSUInteger bytesPerPixel = 4;
    NSUInteger pixelCount = width * height;
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:pixelCount];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // malloc, not alloca: the buffer can be megabytes, and free() on alloca'd memory is undefined behavior
    unsigned char *rawData = (unsigned char *)malloc(height * width * bytesPerPixel);
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    // Now rawData contains the image data in the RGBA8888 pixel format.
    NSUInteger byteIndex = 0;
    for (NSUInteger i = 0; i < pixelCount; ++i) // iterate per pixel, not per byte
    {
        CGFloat alpha = ((CGFloat) rawData[byteIndex + 3]) / 255.0f;
        // Un-premultiply; guard against division by zero on fully transparent pixels
        CGFloat divisor = (alpha > 0) ? alpha : 1.0f;
        CGFloat red   = (((CGFloat) rawData[byteIndex])     / 255.0f) / divisor;
        CGFloat green = (((CGFloat) rawData[byteIndex + 1]) / 255.0f) / divisor;
        CGFloat blue  = (((CGFloat) rawData[byteIndex + 2]) / 255.0f) / divisor;
        byteIndex += bytesPerPixel;
        NSColor *acolor = [NSColor colorWithRed:red green:green blue:blue alpha:alpha];
        [result addObject:acolor]; // insertObject:atIndex:count would throw an out-of-bounds exception
    }
    free(rawData);
    return result;
}
#ifndef TestMethods_hpp
#define TestMethods_hpp

#import <Foundation/Foundation.h>
#include <Carbon/Carbon.h>
#include <stdio.h>
#include <AppKit/AppKit.h>

typedef void (*Unity_Callback1)(char * message);

extern "C" {
    NSArray* getScreenshot();
}

#endif /* TestMethods_hpp */
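Since Unity's P/Invoke layer cannot marshal an NSArray*, the usual route is to hand back a raw buffer through a C ABI and copy it on the C# side (e.g. with Marshal.Copy or Texture2D.LoadRawTextureData). A hedged sketch of that approach; the function names here are hypothetical, and the caller is responsible for freeing the buffer:
extern "C" uint8_t *getScreenshotBytes(int *outWidth, int *outHeight)
{
    CGImageRef screenShot = CGWindowListCreateImage(CGRectInfinite,
        kCGWindowListOptionOnScreenOnly, kCGNullWindowID, kCGWindowImageDefault);
    size_t width = CGImageGetWidth(screenShot);
    size_t height = CGImageGetHeight(screenShot);
    uint8_t *buffer = (uint8_t *)malloc(width * height * 4); // freed by freeScreenshotBytes()
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(buffer, width, height, 8, width * 4,
        colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), screenShot);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(screenShot);
    *outWidth = (int)width;
    *outHeight = (int)height;
    return buffer; // RGBA8888 bytes, ready to copy into a byte[] on the C# side
}

extern "C" void freeScreenshotBytes(uint8_t *buffer)
{
    free(buffer);
}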

Xamarin iOS: How to change the color of a UIImage pixel by pixel

Sorry if this has already been answered somewhere but I could not find it.
Basically, I am receiving a QR code where the code itself is black and the background is white (this is a UIImage). I would like to change the color of the white background to transparent or a custom color, and change the QR code's color from black to white (in Xamarin.iOS).
I already know how to get the color of a specific pixel, using the following code:
static UIColor GetPixelColor(CGBitmapContext context, byte[] rawData,
                             UIImage barcode, CGPoint pt)
{
    // Pin the managed buffer so the unmanaged bitmap context can write into it safely
    var handle = GCHandle.Alloc(rawData, GCHandleType.Pinned);
    UIColor resultColor = null;
    try
    {
        using (context) // note: this disposes the caller's context
        {
            context.DrawImage(new CGRect(-pt.X, pt.Y - barcode.Size.Height,
                barcode.Size.Width, barcode.Size.Height), barcode.CGImage);
            float red = (rawData[0]) / 255.0f;
            float green = (rawData[1]) / 255.0f;
            float blue = (rawData[2]) / 255.0f;
            float alpha = (rawData[3]) / 255.0f;
            resultColor = UIColor.FromRGBA(red, green, blue, alpha);
        }
    }
    finally
    {
        handle.Free(); // the original allocated this handle and never freed it
    }
    return resultColor;
}
This is currently my function:
static UIImage GetRealQRCode(UIImage barcode)
{
    int width = (int)barcode.Size.Width;
    int height = (int)barcode.Size.Height;
    var bytesPerPixel = 4;
    var bytesPerRow = bytesPerPixel * width;
    var bitsPerComponent = 8;
    var colorSpace = CGColorSpace.CreateDeviceRGB();
    var rawData = new byte[bytesPerRow * height];
    CGBitmapContext context = new CGBitmapContext(rawData, width,
        height, bitsPerComponent, bytesPerRow, colorSpace,
        CGImageAlphaInfo.PremultipliedLast);
    // Iterate over pixel coordinates (the original looped over rawData.Length and
    // bytesPerRow, which walks byte indices, not pixels). Note also that GetPixelColor
    // disposes the context, so calling it in a loop like this will still fail.
    for (int x = 0; x < width; x++)
    {
        for (int y = 0; y < height; y++)
        {
            CGPoint pt = new CGPoint(x, y);
            UIColor currentColor = GetPixelColor(context, rawData,
                barcode, pt);
        }
    }
    return null; // TODO: recolor the pixels and build the new UIImage here
}
Anyone know how to do this?
Thanks in advance!
Assuming your UIImage is backed by a CGImage (and not a CIImage):
var cgImage = ImageView1.Image.CGImage; // Your UIImage with a CGImage backing image
var bytesPerPixel = 4;
var bitsPerComponent = 8;
var bytesPerUInt32 = sizeof(UInt32) / sizeof(byte);
var width = cgImage.Width;
var height = cgImage.Height;
var bytesPerRow = bytesPerPixel * cgImage.Width;
var numOfBytes = cgImage.Height * cgImage.Width * bytesPerUInt32;
IntPtr pixelPtr = IntPtr.Zero;
try
{
    pixelPtr = Marshal.AllocHGlobal((int)numOfBytes);
    using (var colorSpace = CGColorSpace.CreateDeviceRGB())
    {
        CGImage newCGImage;
        using (var context = new CGBitmapContext(pixelPtr, width, height, bitsPerComponent, bytesPerRow, colorSpace, CGImageAlphaInfo.PremultipliedLast))
        {
            context.DrawImage(new CGRect(0, 0, width, height), cgImage);
            unsafe
            {
                var currentPixel = (byte*)pixelPtr.ToPointer();
                for (int i = 0; i < height; i++)
                {
                    for (int j = 0; j < width; j++)
                    {
                        // RGBA8888 pixel format
                        if (*currentPixel == byte.MinValue)
                        {
                            // black source pixel becomes opaque white
                            *currentPixel = byte.MaxValue;
                            *(currentPixel + 1) = byte.MaxValue;
                            *(currentPixel + 2) = byte.MaxValue;
                        }
                        else
                        {
                            // everything else becomes transparent
                            *currentPixel = byte.MinValue;
                            *(currentPixel + 1) = byte.MinValue;
                            *(currentPixel + 2) = byte.MinValue;
                            *(currentPixel + 3) = byte.MinValue;
                        }
                        currentPixel += 4;
                    }
                }
            }
            newCGImage = context.ToImage();
        }
        var uiimage = new UIImage(newCGImage);
        imageView2.Image = uiimage; // Do something with your new UIImage
    }
}
finally
{
    if (pixelPtr != IntPtr.Zero)
        Marshal.FreeHGlobal(pixelPtr);
}
If you do not actually need access to the individual pixels, but only the end result, you can use Core Image's pre-existing filters: first invert the colors, then use the black pixels as an alpha mask. Otherwise, see my other answer using Marshal.AllocHGlobal and pointers.
using (var coreImage = new CIImage(ImageView1.Image))
using (var invertFilter = CIFilter.FromName("CIColorInvert"))
{
    invertFilter.Image = coreImage;
    using (var alphaMaskFilter = CIFilter.FromName("CIMaskToAlpha"))
    {
        alphaMaskFilter.Image = invertFilter.OutputImage;
        var newCoreImage = alphaMaskFilter.OutputImage;
        var uiimage = new UIImage(newCoreImage);
        imageView2.Image = uiimage; // Do something with your new UIImage
    }
}
The plus side is this is blazing fast ;-) and the results are the same.
If you need even faster processing, assuming you are batch-converting a number of these images, you can write a custom CIKernel that incorporates these two filters into one kernel and thus processes each image only once.
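For illustration, a minimal single-pass color kernel, shown in Objective-C for brevity (the kernel source, the luminance weights, and the qrImage variable are my assumptions, not code from the answer):
CIColorKernel *kernel = [CIColorKernel kernelWithString:
    @"kernel vec4 invertToAlpha(__sample s) {"
    // invert, then use the inverted luminance as the alpha mask in one step
    @"  float a = 1.0 - dot(s.rgb, vec3(0.299, 0.587, 0.114));"
    @"  return vec4(a, a, a, a);" // premultiplied white, masked by alpha
    @"}"];
CIImage *output = [kernel applyWithExtent:qrImage.extent arguments:@[qrImage]];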
Xamarin.iOS: with this method you can convert all white pixels to transparent. For me it only works with ".jpg" files; with .png it doesn't work, but you can convert the files to .jpg and then call this method.
public static UIImage ProcessImage (UIImage image)
{
    CGImage rawImageRef = image.CGImage;
    // Masking ranges { Rmin, Rmax, Gmin, Gmax, Bmin, Bmax }: pixels whose components
    // all fall within these ranges become transparent
    nfloat[] colorMasking = new nfloat[6] { 222, 255, 222, 255, 222, 255 };
    CGImage imageRef = rawImageRef.WithMaskingColors(colorMasking);
    UIImage imageB = UIImage.FromImage(imageRef);
    return imageB;
}
Regards

Confused about NSImageView scaling

I'm trying to display a simple NSImageView with its image centered, without scaling it, like this:
Just like iOS does when you set a UIView's contentMode = UIViewContentModeCenter
So I tried all the NSImageScaling values; this is what I get when I choose NSScaleNone:
I really don't understand what's going on :-/
You can manually generate the image of the correct size and content, and set it to be the image of the NSImageView so that NSImageView doesn't need to do anything.
NSImage *newImg = [self resizeImage:sourceImage size:newSize];
[aNSImageView setImage:newImg];
The following function resizes an image to fit the new size while keeping the aspect ratio intact. If the image is smaller than the new size, it is scaled up to fill the new frame. If the image is larger than the new size, it is scaled down, again filling the new frame.
- (NSImage*) resizeImage:(NSImage*)sourceImage size:(NSSize)size
{
    NSRect targetFrame = NSMakeRect(0, 0, size.width, size.height);
    NSImage* targetImage = [[NSImage alloc] initWithSize:size];
    NSSize sourceSize = [sourceImage size];

    float ratioH = size.height / sourceSize.height;
    float ratioW = size.width / sourceSize.width;

    NSRect cropRect = NSZeroRect;
    if (ratioH >= ratioW) {
        cropRect.size.width = floor(size.width / ratioH);
        cropRect.size.height = sourceSize.height;
    } else {
        cropRect.size.width = sourceSize.width;
        cropRect.size.height = floor(size.height / ratioW);
    }
    cropRect.origin.x = floor( (sourceSize.width - cropRect.size.width)/2 );
    cropRect.origin.y = floor( (sourceSize.height - cropRect.size.height)/2 );

    [targetImage lockFocus];
    [sourceImage drawInRect:targetFrame
                   fromRect:cropRect        // portion of source image to draw
                  operation:NSCompositeCopy // compositing operation
                   fraction:1.0             // alpha (transparency) value
             respectFlipped:YES             // coordinate system
                      hints:@{NSImageHintInterpolation:
                                  [NSNumber numberWithInt:NSImageInterpolationLow]}];
    [targetImage unlockFocus];

    return targetImage;
}
Here's an awesome category for NSImage: NSImage+ContentMode
It allows content modes like in iOS, works great.
Set the image scaling property to NSImageScaleAxesIndependently, which will scale the image to fill the rectangle. This will not preserve the aspect ratio.
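In code, assuming an NSImageView outlet named imageView:
imageView.imageScaling = NSImageScaleAxesIndependently; // fills the frame, may distort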
Swift version of @Shagru's answer (without the hints)
func resizeImage(_ sourceImage: NSImage, size: CGSize) -> NSImage
{
    let targetFrame = CGRect(origin: CGPoint.zero, size: size)
    let targetImage = NSImage.init(size: size)
    let sourceSize = sourceImage.size

    let ratioH = size.height / sourceSize.height
    let ratioW = size.width / sourceSize.width

    var cropRect = CGRect.zero
    if ratioH >= ratioW {
        cropRect.size.width = floor(size.width / ratioH)
        cropRect.size.height = sourceSize.height
    } else {
        cropRect.size.width = sourceSize.width
        cropRect.size.height = floor(size.height / ratioW)
    }
    cropRect.origin.x = floor( (sourceSize.width - cropRect.size.width)/2 )
    cropRect.origin.y = floor( (sourceSize.height - cropRect.size.height)/2 )

    targetImage.lockFocus()
    sourceImage.draw(in: targetFrame, from: cropRect, operation: .copy, fraction: 1.0, respectFlipped: true, hints: nil)
    targetImage.unlockFocus()

    return targetImage
}

UIImagePickerController returning incorrect image orientation

I'm using UIImagePickerController to capture an image and then store it. However, when I try to rescale it, the orientation value I get from the image is incorrect. When I take a snap while holding the phone upright, it gives me an orientation of Left. Has anyone experienced this issue?
The UIImagePickerController info dictionary shows the following information:
{
    UIImagePickerControllerMediaMetadata = {
        DPIHeight = 72;
        DPIWidth = 72;
        Orientation = 3;
        "{Exif}" = {
            ApertureValue = "2.970853654340484";
            ColorSpace = 1;
            DateTimeDigitized = "2011:02:14 10:26:17";
            DateTimeOriginal = "2011:02:14 10:26:17";
            ExposureMode = 0;
            ExposureProgram = 2;
            ExposureTime = "0.06666666666666667";
            FNumber = "2.8";
            Flash = 32;
            FocalLength = "3.85";
            ISOSpeedRatings = (
                125
            );
            MeteringMode = 1;
            PixelXDimension = 2048;
            PixelYDimension = 1536;
            SceneType = 1;
            SensingMethod = 2;
            Sharpness = 1;
            ShutterSpeedValue = "3.910431673351467";
            SubjectArea = (
                1023,
                767,
                614,
                614
            );
            WhiteBalance = 0;
        };
        "{TIFF}" = {
            DateTime = "2011:02:14 10:26:17";
            Make = Apple;
            Model = "iPhone 3GS";
            Software = "4.2.1";
            XResolution = 72;
            YResolution = 72;
        };
    };
    UIImagePickerControllerMediaType = "public.image";
    UIImagePickerControllerOriginalImage = "<UIImage: 0x40efb50>";
}
However, picture returns imageOrientation == 1. Note that UIImageOrientation is a different enum from the EXIF Orientation tag: a value of 1 here is UIImageOrientationDown (rotated 180 degrees), which is in fact what EXIF Orientation = 3 denotes as well.
UIImage *picture = [info objectForKey:UIImagePickerControllerOriginalImage];
I just started working on this issue in my own app.
I used the UIImage category that Trevor Harmon crafted for resizing an image and fixing its orientation, UIImage+Resize.
Then you can do something like this in -imagePickerController:didFinishPickingMediaWithInfo:
UIImage *pickedImage = [info objectForKey:UIImagePickerControllerEditedImage];
UIImage *resized = [pickedImage resizedImageWithContentMode:UIViewContentModeScaleAspectFit bounds:pickedImage.size interpolationQuality:kCGInterpolationHigh];
This fixed the problem for me. The resized image is oriented correctly visually and the imageOrientation property reports UIImageOrientationUp.
There are several versions of this scale/resize/crop code out there; I used Trevor's because it seems pretty clean and includes some other UIImage manipulators that I want to use later.
This is what I found for fixing the orientation issue; it works for me:
UIImage *initialImage = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
NSData *data = UIImagePNGRepresentation(initialImage);
UIImage *tempImage = [UIImage imageWithData:data];
UIImage *fixedOrientationImage = [UIImage imageWithCGImage:tempImage.CGImage
                                                     scale:initialImage.scale
                                               orientation:initialImage.imageOrientation];
initialImage = fixedOrientationImage;
Here's a Swift snippet that fixes the problem efficiently:
let orientedImage = UIImage(CGImage: initialImage.CGImage, scale: 1, orientation: initialImage.imageOrientation)!
I use the following code that I have put in a separate image utility object file that has a bunch of other editing methods for UIImages:
+ (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSizeWithSameAspectRatio:(CGSize)targetSize
{
    CGSize imageSize = sourceImage.size;
    CGFloat width = imageSize.width;
    CGFloat height = imageSize.height;
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;
    CGFloat scaleFactor = 0.0;
    CGFloat scaledWidth = targetWidth;
    CGFloat scaledHeight = targetHeight;
    CGPoint thumbnailPoint = CGPointMake(0.0, 0.0);

    if (CGSizeEqualToSize(imageSize, targetSize) == NO) {
        CGFloat widthFactor = targetWidth / width;
        CGFloat heightFactor = targetHeight / height;
        if (widthFactor > heightFactor) {
            scaleFactor = widthFactor; // scale to fit height
        }
        else {
            scaleFactor = heightFactor; // scale to fit width
        }
        scaledWidth = width * scaleFactor;
        scaledHeight = height * scaleFactor;
        // center the image
        if (widthFactor > heightFactor) {
            thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
        }
        else if (widthFactor < heightFactor) {
            thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
        }
    }

    CGImageRef imageRef = [sourceImage CGImage];
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
    CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);
    // Compare only the alpha bits; comparing the whole CGBitmapInfo against an alpha constant is wrong
    if ((bitmapInfo & kCGBitmapAlphaInfoMask) == kCGImageAlphaNone) {
        bitmapInfo = (bitmapInfo & ~kCGBitmapAlphaInfoMask) | kCGImageAlphaNoneSkipLast;
    }

    CGContextRef bitmap;
    // Pass 0 for bytesPerRow so Core Graphics computes it; the source image's value is wrong for the rotated cases
    if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown) {
        bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);
    } else {
        bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);
    }

    // In the right or left cases, we need to switch scaledWidth and scaledHeight,
    // and also the thumbnail point
    if (sourceImage.imageOrientation == UIImageOrientationLeft) {
        thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x);
        CGFloat oldScaledWidth = scaledWidth;
        scaledWidth = scaledHeight;
        scaledHeight = oldScaledWidth;
        CGContextRotateCTM (bitmap, M_PI_2); // + 90 degrees
        CGContextTranslateCTM (bitmap, 0, -targetHeight);
    } else if (sourceImage.imageOrientation == UIImageOrientationRight) {
        thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x);
        CGFloat oldScaledWidth = scaledWidth;
        scaledWidth = scaledHeight;
        scaledHeight = oldScaledWidth;
        CGContextRotateCTM (bitmap, -M_PI_2); // - 90 degrees
        CGContextTranslateCTM (bitmap, -targetWidth, 0);
    } else if (sourceImage.imageOrientation == UIImageOrientationUp) {
        // NOTHING
    } else if (sourceImage.imageOrientation == UIImageOrientationDown) {
        CGContextTranslateCTM (bitmap, targetWidth, targetHeight);
        CGContextRotateCTM (bitmap, -M_PI); // - 180 degrees
    }

    CGContextDrawImage(bitmap, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage* newImage = [UIImage imageWithCGImage:ref];
    CGContextRelease(bitmap);
    CGImageRelease(ref);
    return newImage;
}
And then I call:
UIImage *pickedImage = [info objectForKey:UIImagePickerControllerOriginalImage];
UIImage *fixedOriginal = [ImageUtil imageWithImage:pickedImage scaledToSizeWithSameAspectRatio:pickedImage.size];
In iOS 7, I needed code dependent on UIImage.imageOrientation to correct for the different orientations. Now, in iOS 8.2, when I pick my old test images from the album via UIImagePickerController, the orientation will be UIImageOrientationUp for ALL images. When I take a photo (UIImagePickerControllerSourceTypeCamera), those images will also always be upwards, regardless of the device orientation.
So between those iOS versions there has obviously been a fix, whereby UIImagePickerController already rotates the images if necessary.
You can even notice that when the album images are displayed: for a split second, they will be displayed in the original orientation, before they appear in the new upward orientation.
The only thing that worked for me was to re-render the image again, which forces the correct orientation:
if photo.imageOrientation != .up {
    UIGraphicsBeginImageContextWithOptions(photo.size, false, 1.0)
    let rect = CGRect(x: 0, y: 0, width: photo.size.width, height: photo.size.height)
    photo.draw(in: rect)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    photo = newImage ?? photo // UIGraphicsGetImageFromCurrentImageContext() returns an optional
}

Cocoa OpenGL Texture Creation

I am working on my first OpenGL application using Cocoa (I have used OpenGL ES on the iPhone) and I am having trouble loading a texture from an image file. Here is my texture loading code:
@interface MyOpenGLView : NSOpenGLView
{
    GLenum texFormat[ 1 ];   // Format of texture (GL_RGB, GL_RGBA)
    NSSize texSize[ 1 ];     // Width and height
    GLuint textures[ 1 ];    // Storage for one texture
}
- (BOOL) loadBitmap:(NSString *)filename intoIndex:(int)texIndex
{
    BOOL success = FALSE;
    NSBitmapImageRep *theImage;
    int bitsPPixel, bytesPRow;
    unsigned char *theImageData;

    NSData* imgData = [NSData dataWithContentsOfFile:filename options:NSUncachedRead error:nil];
    theImage = [NSBitmapImageRep imageRepWithData:imgData];
    if( theImage != nil )
    {
        bitsPPixel = [theImage bitsPerPixel];
        bytesPRow = [theImage bytesPerRow];
        if( bitsPPixel == 24 )        // No alpha channel
            texFormat[texIndex] = GL_RGB;
        else if( bitsPPixel == 32 )   // There is an alpha channel
            texFormat[texIndex] = GL_RGBA;
        texSize[texIndex].width = [theImage pixelsWide];
        texSize[texIndex].height = [theImage pixelsHigh];
        theImageData = [theImage bitmapData]; // this assignment was missing; theImageData was read uninitialized
        if( theImageData != NULL )
        {
            NSLog(@"Good so far...");
            success = TRUE;
            // Create the texture
            glGenTextures(1, &textures[texIndex]);
            NSLog(@"tex: %i", textures[texIndex]);
            NSLog(@"%i", glIsTexture(textures[texIndex]));
            glPixelStorei(GL_UNPACK_ROW_LENGTH, [theImage pixelsWide]);
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
            // Typical texture generation using data from the bitmap
            glBindTexture(GL_TEXTURE_2D, textures[texIndex]);
            NSLog(@"%i", glIsTexture(textures[texIndex]));
            // Use the matching internal format so an alpha channel isn't silently dropped
            glTexImage2D(GL_TEXTURE_2D, 0, texFormat[texIndex], texSize[texIndex].width, texSize[texIndex].height, 0, texFormat[texIndex], GL_UNSIGNED_BYTE, theImageData);
            NSLog(@"%i", glIsTexture(textures[texIndex]));
        }
    }
    return success;
}
It seems that the glGenTextures() function is not actually creating a texture because textures[0] remains 0. Also, logging glIsTexture(textures[texIndex]) always returns false.
Any suggestions?
Thanks,
Kyle
glGenTextures(1, &textures[texIndex] );
What is your textures definition?
glIsTexture only returns true once the name is actually associated with a texture. A name returned by glGenTextures, but not yet associated with a texture by calling glBindTexture, is not the name of a texture.
Check whether glGenTextures is by accident executed between glBegin and glEnd -- that's the only official failure reason.
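A quick illustration of that rule (a sketch; assumes a current GL context):
GLuint tex = 0;
glGenTextures(1, &tex);              // 'tex' is now a valid name...
NSLog(@"%i", glIsTexture(tex));      // ...but this still logs 0 (GL_FALSE)
glBindTexture(GL_TEXTURE_2D, tex);   // the first bind creates the texture object
NSLog(@"%i", glIsTexture(tex));      // now logs 1 (GL_TRUE)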
Also:
Check if the texture is square and has dimensions that are a power of 2.
Although it isn't emphasized nearly enough, the iPhone's OpenGL ES implementation requires them to be that way.
OK, I figured it out. It turns out that I was trying to load the textures before I had set up my context. Once I moved texture loading to the end of the initialization method, it worked fine.
Thanks for the answers.
Kyle
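In NSOpenGLView terms, that ordering fix looks roughly like this (a sketch; the file path and index are placeholders):
- (void)prepareOpenGL
{
    [super prepareOpenGL];
    // The view's context exists and is valid here; make it current before any GL calls
    [[self openGLContext] makeCurrentContext];
    [self loadBitmap:@"/path/to/texture.png" intoIndex:0];
}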
