How to Convert CIImage to CMSampleBufferRef

I am using AVCaptureSession to record video, and I receive the output in the didOutputSampleBuffer delegate method in the form of a CMSampleBufferRef. I converted that to a CIImage and made some changes, and now I want to convert the CIImage back to a CMSampleBufferRef. Can anyone please help me?
I browsed a lot but didn't find an answer to this anywhere; I hope I'll find it here.

I take no credit; credit goes to: Crop CMSampleBufferRef
// Render the modified CIImage into a fresh pixel buffer.
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreate(kCFAllocatorSystemDefault, 640, 480, kCVPixelFormatType_32BGRA, NULL, &pixelBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CIContext *ciContext = [CIContext contextWithOptions:nil];
[ciContext render:ciImage toCVPixelBuffer:pixelBuffer];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
// Copy the timing info from the original sample buffer so A/V sync is preserved.
CMSampleTimingInfo sampleTime = {
    .duration = CMSampleBufferGetDuration(sampleBuffer),
    .presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer),
    .decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer)
};
// Wrap the pixel buffer in a new sample buffer.
CMVideoFormatDescriptionRef videoInfo = NULL;
CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, &videoInfo);
CMSampleBufferRef oBuf;
CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, true, NULL, NULL, videoInfo, &sampleTime, &oBuf);
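For anyone doing this in current Swift, here is a minimal sketch of the same approach using the renamed CoreMedia Swift overlay. It is untested; the 640x480 size is just an example, and ciImage/sampleBuffer are assumed to come from your capture callback:
import AVFoundation
import CoreImage

// Sketch: wrap a CIImage back into a CMSampleBuffer, copying the timing
// of the original buffer. Sizes and names here are assumptions.
func makeSampleBuffer(from ciImage: CIImage,
                      timingFrom sampleBuffer: CMSampleBuffer) -> CMSampleBuffer? {
    var pixelBuffer: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault, 640, 480,
                        kCVPixelFormatType_32BGRA, nil, &pixelBuffer)
    guard let buffer = pixelBuffer else { return nil }

    // Render the filtered image into the new pixel buffer.
    CIContext().render(ciImage, to: buffer)

    // Reuse the original buffer's timing.
    var timing = CMSampleTimingInfo(
        duration: CMSampleBufferGetDuration(sampleBuffer),
        presentationTimeStamp: CMSampleBufferGetPresentationTimeStamp(sampleBuffer),
        decodeTimeStamp: CMSampleBufferGetDecodeTimeStamp(sampleBuffer))

    var formatDescription: CMVideoFormatDescription?
    CMVideoFormatDescriptionCreateForImageBuffer(
        allocator: kCFAllocatorDefault, imageBuffer: buffer,
        formatDescriptionOut: &formatDescription)
    guard let format = formatDescription else { return nil }

    var output: CMSampleBuffer?
    CMSampleBufferCreateForImageBuffer(
        allocator: kCFAllocatorDefault, imageBuffer: buffer,
        dataReady: true, makeDataReadyCallback: nil, refcon: nil,
        formatDescription: format, sampleTiming: &timing,
        sampleBufferOut: &output)
    return output
}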

Related

Swift 3 Colour Space macOS not iOS

How can I convert an RGB image into its grayscale colour space? I can find a lot of code for iOS but none for macOS, and Apple's documentation is all in Objective-C.
let width = image.size.width
let height = image.size.height
let imageRect = NSMakeRect(0, 0, width, height);
let colorSpace = CGColorSpaceCreateDeviceGray();
let bits = image.representations.first as! NSBitmapImageRep;
bitmap!.representationUsingType(NSBitmapImageFileType.NSPNGFileType, properties: nil)
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue);
let context = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue);
context.draw(image.cgImage!, in : imageRect);// and this line is wrong obviously..
This is what I have got so far, just copied and pasted from the internet, but I have no idea how to go further.
I have found an interesting way to do this. My code is simply combined from the three sources below.
how to create grayscale image from nsimage in swift?
Greyscale Image using COCOA and NSImage
Changing the Color Space of NSImage: The second reply
My Code:
func saveImage(image: NSImage, destination: URL) throws {
    let rep = greyScale(image: image)
    let data = rep.representation(using: NSBitmapImageFileType.JPEG, properties: [:])
    try data?.write(to: destination)
}
// rgb2gray
func greyScale(image: NSImage) -> NSBitmapImageRep {
    let w = image.size.width
    let h = image.size.height
    let imageRect = NSMakeRect(0, 0, w, h)
    let colorSpace = CGColorSpaceCreateDeviceGray()
    let context = CGContext(data: nil, width: Int(w),
                            height: Int(h), bitsPerComponent: 8,
                            bytesPerRow: 0, space: colorSpace,
                            bitmapInfo: CGImageAlphaInfo.none.rawValue)!
    context.draw(nsImageToCGImage(image: image), in: imageRect)
    let greyImage = context.makeImage()!
    return NSBitmapImageRep(cgImage: greyImage)
}
func nsImageToCGImage(image: NSImage) -> CGImage {
    let imageData = image.tiffRepresentation! as CFData
    let imageSource = CGImageSourceCreateWithData(imageData, nil)!
    return CGImageSourceCreateImageAtIndex(imageSource, 0, nil)!
}
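For completeness, a hypothetical call site for the functions above (the input image and output path are made up):
// Hypothetical usage of saveImage(image:destination:).
let input = NSImage(contentsOfFile: "/tmp/photo.jpg")!        // assumed input image
let output = URL(fileURLWithPath: "/tmp/photo-grey.jpg")      // assumed output path
try saveImage(image: input, destination: output)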
I am still trying to understand the principle behind it.
You can try CIFilter. The annoyance is that you have to convert back and forth between NSImage and CIImage:
import Cocoa
import CoreImage
let url = Bundle.main.url(forResource: "image", withExtension: "jpg")!
let image = CIImage(contentsOf: url)!
let bwFilter = CIFilter(name: "CIColorControls", withInputParameters: ["inputImage": image, "inputSaturation": 0.0])!
if let ciImage = bwFilter.outputImage {
let rep = NSCIImageRep(ciImage: ciImage)
let nsImage = NSImage(size: rep.size)
nsImage.addRepresentation(rep)
// nsImage is now your black-and-white image
}
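If you are starting from an NSImage rather than a file URL, one possible bridge (a sketch; it assumes the TIFF representation is available) is:
// Sketch: NSImage -> CIImage via its bitmap representation.
func ciImage(from nsImage: NSImage) -> CIImage? {
    guard let tiff = nsImage.tiffRepresentation,
          let rep = NSBitmapImageRep(data: tiff) else { return nil }
    return CIImage(bitmapImageRep: rep)
}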

Add To Cart Animation in Swift

I am using the code below to perform an "add to cart" animation.
I recently built a new app using Swift and I'm having a hard time converting this code from Objective-C to Swift.
This code animates a UITableView cell button so that its image jumps into the cart (a UITabBar item).
// AddToCart button (cell button)
- (void)AddToCart:(UIButton *)sender {
    // get the selected index
    CGPoint center = sender.center;
    CGPoint rootViewPoint = [sender.superview convertPoint:center toView:self.Tableview];
    NSIndexPath *indexPath = [self.Tableview indexPathForRowAtPoint:rootViewPoint];
    // add to cart
    [checkoutCart AddItem:SandwichArray[indexPath.row]];
    MyCell *cell = (MyCell *)[self.Tableview dequeueReusableCellWithIdentifier:@"Cell"];
    // grab the imageview
    UIImageView *imgV = (UIImageView *)[cell viewWithTag:400];
    // get the exact location of image
    CGRect rect = [imgV.superview convertRect:imgV.frame fromView:nil];
    rect = CGRectMake(5, (rect.origin.y * -1) - 10, imgV.frame.size.width, imgV.frame.size.height);
    // create new duplicate image
    UIImageView *starView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"AddItem.png"]];
    [starView setFrame:rect];
    starView.layer.cornerRadius = 5;
    starView.layer.borderColor = [[UIColor blackColor] CGColor];
    starView.layer.borderWidth = 1;
    [self.view addSubview:starView];
    // apply position animation
    CAKeyframeAnimation *pathAnimation = [CAKeyframeAnimation animationWithKeyPath:@"position"];
    pathAnimation.calculationMode = kCAAnimationPaced;
    pathAnimation.fillMode = kCAFillModeForwards;
    pathAnimation.removedOnCompletion = NO;
    pathAnimation.duration = 0.75;
    pathAnimation.delegate = self;
    // tabbar position
    CGPoint endPoint = CGPointMake(210 + rect.size.width / 2, 390 + rect.size.height / 2);
    CGMutablePathRef curvedPath = CGPathCreateMutable();
    CGPathMoveToPoint(curvedPath, NULL, starView.frame.origin.x, starView.frame.origin.y);
    CGPathAddCurveToPoint(curvedPath, NULL, endPoint.x, starView.frame.origin.y, endPoint.x, starView.frame.origin.y, endPoint.x, endPoint.y);
    pathAnimation.path = curvedPath;
    CGPathRelease(curvedPath);
    // apply transform animation
    CABasicAnimation *basic = [CABasicAnimation animationWithKeyPath:@"transform"];
    [basic setToValue:[NSValue valueWithCATransform3D:CATransform3DMakeScale(0.25, 0.25, 0.25)]];
    [basic setAutoreverses:NO];
    [basic setDuration:0.75];
    [starView.layer addAnimation:pathAnimation forKey:@"curveAnimation"];
    [starView.layer addAnimation:basic forKey:@"transform"];
    [starView performSelector:@selector(removeFromSuperview) withObject:nil afterDelay:0.75];
    [self performSelector:@selector(reloadBadgeNumber) withObject:nil afterDelay:0.75];
}
This is my Swift code:
//AddToCart button (of cell)
func AddToCart(sender:UIButton){
// get the selected index
var center:CGPoint = sender.center;
var rootViewPoint:CGPoint = sender.superview!.convertPoint(center, toView:self.TableView)
var indexPath:NSIndexPath = self.TableView!.indexPathForRowAtPoint(rootViewPoint)!
// add to cart
//ShopingCart.AddItem(item)
var cell:Menu_Cell = self.TableView!.dequeueReusableCellWithIdentifier("cell") as Menu_Cell
//grab the imageview using cell
var imgV:UIImageView = cell.imageView!
// get the exact location of image
var rect:CGRect = imgV.superview!.convertRect(imgV.frame ,fromView:nil)
rect = CGRectMake(5, (rect.origin.y*(-1))-10, imgV.frame.size.width, imgV.frame.size.height);
// create new duplicate image
var starView:UIImageView = cell.imageView!
starView.frame = rect
starView.layer.cornerRadius=5;
starView.layer.borderWidth=1;
self.view.addSubview(starView)
// position animation
// var pathAnimation:CAKeyframeAnimation = CAKeyframeAnimation.animationWithKeyPath("position")
var pathAnimation:CAPropertyAnimation = CAPropertyAnimation(keyPath: "position")
// pathAnimation.calculationMode = kCAAnimationPaced
pathAnimation.fillMode = kCAFillModeForwards
pathAnimation.removedOnCompletion = false
pathAnimation.duration=0.75
pathAnimation.delegate=self
// tab-bar right side item frame-point = end point
var endPoint:CGPoint = CGPointMake(210+rect.size.width/2, 390+rect.size.height/2);
// animation position animation
var curvedPath:CGMutablePathRef = CGPathCreateMutable();
CGPathMoveToPoint(curvedPath, nil, starView.frame.origin.x, starView.frame.origin.y);
CGPathAddCurveToPoint(curvedPath, nil, endPoint.x, starView.frame.origin.y, endPoint.x, starView.frame.origin.y, endPoint.x, endPoint.y);
// pathAnimation.path = curvedPath;
// apply transform animation
// var basic:CABasicAnimation = CABasicAnimation.animationWithKeyPath("transform")
var basic:CAPropertyAnimation = CAPropertyAnimation(keyPath: "transform")
// basic.valueForKeyPath(NSValue.valueWithCATransform3D(CATransform3DMakeScale(0.25, 0.25, 0.25)))
// basic.setAutoreverses(false)
basic.duration = 0.75
starView.layer.addAnimation(pathAnimation,forKey: "curveAnimation")
starView.layer.addAnimation(basic,forKey:"transform")
starView.removeFromSuperview()
// [self performSelector:@selector(reloadBadgeNumber) withObject:nil afterDelay:0.75];
}
I am getting an error here:
starView.layer.addAnimation(pathAnimation, forKey: "curveAnimation")
starView.layer.addAnimation(basic, forKey: "transform")
**'-[CAPropertyAnimation _copyRenderAnimationForLayer:]: unrecognized selector sent to instance 0x7fd612c11780'**
Any suggestions?
Importing QuartzCore is not the proper answer here. CAPropertyAnimation is an abstract class and must not be instantiated directly; make sure you create a concrete animation class with a proper key path. I faced the same crash and changed to CAKeyframeAnimation (my task differs from yours).
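Concretely, replacing the two CAPropertyAnimation lines with concrete subclasses avoids the crash; a sketch in current Swift syntax:
// CAPropertyAnimation is abstract; create concrete subclasses instead.
let pathAnimation = CAKeyframeAnimation(keyPath: "position")
let basic = CABasicAnimation(keyPath: "transform")
basic.toValue = NSValue(caTransform3D: CATransform3DMakeScale(0.25, 0.25, 0.25))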
That error appears if the import of the QuartzCore header is missing from your code. So you need to import the QuartzCore framework:
import QuartzCore
var center:CGPoint = sender.center;
var rootViewPoint:CGPoint = sender.superview!.convertPoint(center, toView:self.tableView)
var indexPath:NSIndexPath = self.tableView!.indexPathForRowAtPoint(rootViewPoint)!
var cell:Cell_3 = self.tableView.dequeueReusableCellWithIdentifier("Cell", forIndexPath: indexPath) as! Cell_3
var imgV:UITextField = cell.tf_adet!
// get the exact location of image
var rect:CGRect = imgV.superview!.convertRect(imgV.frame ,fromView:nil)
rect = CGRectMake(rect.origin.x, (rect.origin.y*(-1))-10, imgV.frame.size.width, imgV.frame.size.height);
// create new duplicate image
var starView:UITextField = cell.tf_adet
starView.frame = rect
starView.layer.cornerRadius=5;
starView.layer.borderWidth=1;
self.view.addSubview(starView)
// now create a bezier path that defines our curve
// the animation function needs the curve defined as a CGPath
// but these are more difficult to work with, so instead
// we'll create a UIBezierPath, and then create a
// CGPath from the bezier when we need it
let path = UIBezierPath()
// tab-bar right side item frame-point = end point
var endPoint:CGPoint = CGPointMake(140+rect.size.width/2, 790+rect.size.height/2);
path.moveToPoint(CGPointMake(starView.frame.origin.x, starView.frame.origin.y))
path.addCurveToPoint(CGPoint(x: endPoint.x, y: endPoint.y),
controlPoint1: CGPoint(x: endPoint.x, y: starView.frame.origin.y),
controlPoint2: CGPoint(x: endPoint.x, y: starView.frame.origin.y ))
// create a new CAKeyframeAnimation that animates the objects position
let anim = CAKeyframeAnimation(keyPath: "position")
// set the animations path to our bezier curve
anim.path = path.CGPath
// set some more parameters for the animation
// this rotation mode means that our object will rotate so that it's parallel to whatever point it is currently on the curve
// anim.rotationMode = kCAFillModeForwards
anim.fillMode = kCAFillModeForwards
//anim.repeatCount = Float.infinity
anim.duration = 0.65
anim.removedOnCompletion = false
anim.delegate=self
// apply transform animation
var animation : CABasicAnimation = CABasicAnimation(keyPath: "transform");
var transform : CATransform3D = CATransform3DMakeScale(2,2,1 ) //0.25, 0.25, 0.25);
//animation.setValue(NSValue(CATransform3D: transform), forKey: "scaleText");
animation.duration = 0.75;
starView.layer.addAnimation(anim, forKey: "curveAnimation")
starView.layer.addAnimation(animation, forKey: "transform");
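One detail the code above leaves out is removing starView once the flight ends (the Objective-C original used performSelector:afterDelay:). A CATransaction completion block is one way to do it; this is a sketch in current Swift syntax:
// Sketch: remove the flying view when both animations complete.
CATransaction.begin()
CATransaction.setCompletionBlock {
    starView.removeFromSuperview()
    // reloadBadgeNumber()  // hypothetical cart-badge update, as in the original
}
starView.layer.add(anim, forKey: "curveAnimation")
starView.layer.add(animation, forKey: "transform")
CATransaction.commit()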

Confused about NSImageView scaling

I'm trying to display a simple NSImageView with its image centered, without scaling it, like this:
Just like iOS does when you set a UIView's contentMode = UIViewContentModeCenter.
So I tried all the NSImageScaling values; this is what I get when I choose NSScaleNone:
I really don't understand what's going on :-/
You can manually generate the image of the correct size and content, and set it to be the image of the NSImageView so that NSImageView doesn't need to do anything.
NSImage *newImg = [self resizeImage:sourceImage size:newSize];
[aNSImageView setImage:newImg];
The following function resizes an image to fit the new size, keeping the aspect ratio intact. If the image is smaller than the new size, it is scaled up to fill the new frame; if it is larger, it is downsized to fill the new frame.
- (NSImage *)resizeImage:(NSImage *)sourceImage size:(NSSize)size {
    NSRect targetFrame = NSMakeRect(0, 0, size.width, size.height);
    NSImage *targetImage = [[NSImage alloc] initWithSize:size];
    NSSize sourceSize = [sourceImage size];
    float ratioH = size.height / sourceSize.height;
    float ratioW = size.width / sourceSize.width;
    NSRect cropRect = NSZeroRect;
    if (ratioH >= ratioW) {
        cropRect.size.width = floor(size.width / ratioH);
        cropRect.size.height = sourceSize.height;
    } else {
        cropRect.size.width = sourceSize.width;
        cropRect.size.height = floor(size.height / ratioW);
    }
    cropRect.origin.x = floor((sourceSize.width - cropRect.size.width) / 2);
    cropRect.origin.y = floor((sourceSize.height - cropRect.size.height) / 2);
    [targetImage lockFocus];
    [sourceImage drawInRect:targetFrame
                   fromRect:cropRect          // portion of source image to draw
                  operation:NSCompositeCopy   // compositing operation
                   fraction:1.0               // alpha (transparency) value
             respectFlipped:YES               // coordinate system
                      hints:@{NSImageHintInterpolation:
                              [NSNumber numberWithInt:NSImageInterpolationLow]}];
    [targetImage unlockFocus];
    return targetImage;
}
Here's an awesome category for NSImage: NSImage+ContentMode
It allows content modes like in iOS and works great.
Set the image scaling property to NSImageScaleAxesIndependently, which will scale the image to fill the rectangle. This will not preserve the aspect ratio.
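In Swift that would be something like the following (assuming an outlet named imageView):
imageView.imageScaling = .scaleAxesIndependently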
Swift version of @Shagru's answer (without the hints):
func resizeImage(_ sourceImage:NSImage, size:CGSize) -> NSImage
{
let targetFrame = CGRect(origin: CGPoint.zero, size: size);
let targetImage = NSImage.init(size: size)
let sourceSize = sourceImage.size
let ratioH = size.height / sourceSize.height;
let ratioW = size.width / sourceSize.width;
var cropRect = CGRect.zero;
if (ratioH >= ratioW) {
cropRect.size.width = floor (size.width / ratioH);
cropRect.size.height = sourceSize.height;
} else {
cropRect.size.width = sourceSize.width;
cropRect.size.height = floor(size.height / ratioW);
}
cropRect.origin.x = floor( (sourceSize.width - cropRect.size.width)/2 );
cropRect.origin.y = floor( (sourceSize.height - cropRect.size.height)/2 );
targetImage.lockFocus()
sourceImage.draw(in: targetFrame, from: cropRect, operation: .copy, fraction: 1.0, respectFlipped: true, hints: nil )
targetImage.unlockFocus()
return targetImage;
}
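A hypothetical call site for the function above:
// Crop-and-fill a 200x200 thumbnail (sourceImage is assumed to exist).
let thumbnail = resizeImage(sourceImage, size: CGSize(width: 200, height: 200))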

UIImagePickerController returning incorrect image orientation

I'm using UIImagePickerController to capture an image and then store it. However, when I try to rescale it, the orientation value I get out of this image is incorrect. When I take a snap holding the phone upright, it gives me an orientation of Left. Has anyone experienced this issue?
The UIImagePickerController dictionary shows following information:
{
UIImagePickerControllerMediaMetadata = {
DPIHeight = 72;
DPIWidth = 72;
Orientation = 3;
"{Exif}" = {
ApertureValue = "2.970853654340484";
ColorSpace = 1;
DateTimeDigitized = "2011:02:14 10:26:17";
DateTimeOriginal = "2011:02:14 10:26:17";
ExposureMode = 0;
ExposureProgram = 2;
ExposureTime = "0.06666666666666667";
FNumber = "2.8";
Flash = 32;
FocalLength = "3.85";
ISOSpeedRatings = (
125
);
MeteringMode = 1;
PixelXDimension = 2048;
PixelYDimension = 1536;
SceneType = 1;
SensingMethod = 2;
Sharpness = 1;
ShutterSpeedValue = "3.910431673351467";
SubjectArea = (
1023,
767,
614,
614
);
WhiteBalance = 0;
};
"{TIFF}" = {
DateTime = "2011:02:14 10:26:17";
Make = Apple;
Model = "iPhone 3GS";
Software = "4.2.1";
XResolution = 72;
YResolution = 72;
};
};
UIImagePickerControllerMediaType = "public.image";
UIImagePickerControllerOriginalImage = "<UIImage: 0x40efb50>";
}
However, the picture returns imageOrientation == 1;
UIImage *picture = [info objectForKey:UIImagePickerControllerOriginalImage];
I just started working on this issue in my own app.
I used the UIImage category that Trevor Harmon crafted for resizing an image and fixing its orientation, UIImage+Resize.
Then you can do something like this in -imagePickerController:didFinishPickingMediaWithInfo:
UIImage *pickedImage = [info objectForKey:UIImagePickerControllerEditedImage];
UIImage *resized = [pickedImage resizedImageWithContentMode:UIViewContentModeScaleAspectFit bounds:pickedImage.size interpolationQuality:kCGInterpolationHigh];
This fixed the problem for me. The resized image is oriented correctly visually and the imageOrientation property reports UIImageOrientationUp.
There are several versions of this scale/resize/crop code out there; I used Trevor's because it seems pretty clean and includes some other UIImage manipulators that I want to use later.
This is what I found for fixing the orientation issue; it works for me:
UIImage *initialImage = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
NSData *data = UIImagePNGRepresentation(initialImage);
UIImage *tempImage = [UIImage imageWithData:data];
UIImage *fixedOrientationImage = [UIImage imageWithCGImage:tempImage.CGImage
                                                     scale:initialImage.scale
                                               orientation:initialImage.imageOrientation];
initialImage = fixedOrientationImage;
Here's a Swift snippet that fixes the problem efficiently:
let orientedImage = UIImage(CGImage: initialImage.CGImage, scale: 1, orientation: initialImage.imageOrientation)!
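In Swift 3 and later, the same one-liner reads as follows (note the lowercased labels; this assumes cgImage is non-nil):
let orientedImage = UIImage(cgImage: initialImage.cgImage!, scale: 1, orientation: initialImage.imageOrientation)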
I use the following code that I have put in a separate image utility object file that has a bunch of other editing methods for UIImages:
+ (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSizeWithSameAspectRatio:(CGSize)targetSize
{
CGSize imageSize = sourceImage.size;
CGFloat width = imageSize.width;
CGFloat height = imageSize.height;
CGFloat targetWidth = targetSize.width;
CGFloat targetHeight = targetSize.height;
CGFloat scaleFactor = 0.0;
CGFloat scaledWidth = targetWidth;
CGFloat scaledHeight = targetHeight;
CGPoint thumbnailPoint = CGPointMake(0.0,0.0);
if (CGSizeEqualToSize(imageSize, targetSize) == NO) {
CGFloat widthFactor = targetWidth / width;
CGFloat heightFactor = targetHeight / height;
if (widthFactor > heightFactor) {
scaleFactor = widthFactor; // scale to fit height
}
else {
scaleFactor = heightFactor; // scale to fit width
}
scaledWidth = width * scaleFactor;
scaledHeight = height * scaleFactor;
// center the image
if (widthFactor > heightFactor) {
thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
}
else if (widthFactor < heightFactor) {
thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
}
}
CGImageRef imageRef = [sourceImage CGImage];
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);
if (bitmapInfo == kCGImageAlphaNone) {
bitmapInfo = kCGImageAlphaNoneSkipLast;
}
CGContextRef bitmap;
if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown) {
bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo);
} else {
bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo);
}
// In the right or left cases, we need to switch scaledWidth and scaledHeight,
// and also the thumbnail point
if (sourceImage.imageOrientation == UIImageOrientationLeft) {
thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x);
CGFloat oldScaledWidth = scaledWidth;
scaledWidth = scaledHeight;
scaledHeight = oldScaledWidth;
CGContextRotateCTM (bitmap, M_PI_2); // + 90 degrees
CGContextTranslateCTM (bitmap, 0, -targetHeight);
} else if (sourceImage.imageOrientation == UIImageOrientationRight) {
thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x);
CGFloat oldScaledWidth = scaledWidth;
scaledWidth = scaledHeight;
scaledHeight = oldScaledWidth;
CGContextRotateCTM (bitmap, -M_PI_2); // - 90 degrees
CGContextTranslateCTM (bitmap, -targetWidth, 0);
} else if (sourceImage.imageOrientation == UIImageOrientationUp) {
// NOTHING
} else if (sourceImage.imageOrientation == UIImageOrientationDown) {
CGContextTranslateCTM (bitmap, targetWidth, targetHeight);
CGContextRotateCTM (bitmap, -M_PI); // - 180 degrees
}
CGContextDrawImage(bitmap, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), imageRef);
CGImageRef ref = CGBitmapContextCreateImage(bitmap);
UIImage* newImage = [UIImage imageWithCGImage:ref];
CGContextRelease(bitmap);
CGImageRelease(ref);
return newImage;
}
And then I call:
UIImage *pickedImage = [info objectForKey:UIImagePickerControllerOriginalImage];
UIImage *fixedOriginal = [ImageUtil imageWithImage:pickedImage scaledToSizeWithSameAspectRatio:pickedImage.size];
In iOS 7, I needed code dependent on UIImage.imageOrientation to correct for the different orientations. Now, in iOS 8.2, when I pick my old test images from the album via UIImagePickerController, the orientation will be UIImageOrientationUp for ALL images. When I take a photo (UIImagePickerControllerSourceTypeCamera), those images will also always be upwards, regardless of the device orientation.
So between those iOS versions, there has obviously been a fix whereby UIImagePickerController already rotates the images if necessary.
You can even notice that when the album images are displayed: for a split second, they will be displayed in the original orientation, before they appear in the new upward orientation.
The only thing that worked for me was to re-render the image, which forces the correct orientation:
if photo.imageOrientation != .up {
    UIGraphicsBeginImageContextWithOptions(photo.size, false, 1.0)
    let rect = CGRect(x: 0, y: 0, width: photo.size.width, height: photo.size.height)
    photo.draw(in: rect)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    photo = newImage ?? photo
}

Cocoa OpenGL Texture Creation

I am working on my first OpenGL application using Cocoa (I have used OpenGL ES on the iPhone) and I am having trouble loading a texture from an image file. Here is my texture loading code:
@interface MyOpenGLView : NSOpenGLView
{
    GLenum texFormat[1];   // Format of texture (GL_RGB, GL_RGBA)
    NSSize texSize[1];     // Width and height
    GLuint textures[1];    // Storage for one texture
}
- (BOOL)loadBitmap:(NSString *)filename intoIndex:(int)texIndex
{
    BOOL success = FALSE;
    NSBitmapImageRep *theImage;
    int bitsPPixel, bytesPRow;
    unsigned char *theImageData;
    NSData *imgData = [NSData dataWithContentsOfFile:filename options:NSUncachedRead error:nil];
    theImage = [NSBitmapImageRep imageRepWithData:imgData];
    if( theImage != nil )
    {
        bitsPPixel = [theImage bitsPerPixel];
        bytesPRow = [theImage bytesPerRow];
        if( bitsPPixel == 24 )        // No alpha channel
            texFormat[texIndex] = GL_RGB;
        else if( bitsPPixel == 32 )   // There is an alpha channel
            texFormat[texIndex] = GL_RGBA;
        texSize[texIndex].width = [theImage pixelsWide];
        texSize[texIndex].height = [theImage pixelsHigh];
        theImageData = [theImage bitmapData];
        if( theImageData != NULL )
        {
            NSLog(@"Good so far...");
            success = TRUE;
            // Create the texture
            glGenTextures(1, &textures[texIndex]);
            NSLog(@"tex: %i", textures[texIndex]);
            NSLog(@"%i", glIsTexture(textures[texIndex]));
            glPixelStorei(GL_UNPACK_ROW_LENGTH, [theImage pixelsWide]);
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
            // Typical texture generation using data from the bitmap
            glBindTexture(GL_TEXTURE_2D, textures[texIndex]);
            NSLog(@"%i", glIsTexture(textures[texIndex]));
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texSize[texIndex].width, texSize[texIndex].height, 0, texFormat[texIndex], GL_UNSIGNED_BYTE, [theImage bitmapData]);
            NSLog(@"%i", glIsTexture(textures[texIndex]));
        }
    }
    return success;
}
It seems that the glGenTextures() function is not actually creating a texture because textures[0] remains 0. Also, logging glIsTexture(textures[texIndex]) always returns false.
Any suggestions?
Thanks,
Kyle
glGenTextures(1, &textures[texIndex]);
What is your textures definition?
glIsTexture only returns true once the name is actually associated with a texture. A name returned by glGenTextures, but not yet associated with a texture by calling glBindTexture, is not the name of a texture.
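To illustrate the point (a macOS Swift sketch; it assumes a GL context is already current):
import OpenGL.GL

var tex: GLuint = 0
glGenTextures(1, &tex)                      // reserves a name; not yet a texture
print(glIsTexture(tex))                     // 0 (GL_FALSE)
glBindTexture(GLenum(GL_TEXTURE_2D), tex)   // first bind creates the texture object
print(glIsTexture(tex))                     // 1 (GL_TRUE)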
Check whether glGenTextures is by accident executed between glBegin and glEnd -- that's the only official failure reason.
Also:
Check that the texture is square and has dimensions that are a power of 2.
Although it isn't emphasized anywhere nearly enough, the iPhone's OpenGL ES implementation requires textures to be that way.
OK, I figured it out. It turns out that I was trying to load the textures before I set up my context. Once I moved texture loading to the end of the initialization method, it worked fine.
Thanks for the answers.
Kyle
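For reference, a sketch of that fix in an NSOpenGLView subclass (Swift; loadBitmap is a hypothetical bridge to the loader from the question):
override func prepareOpenGL() {
    super.prepareOpenGL()
    // The view's GL context is set up and current by the time this runs,
    // so texture creation will succeed here.
    openGLContext?.makeCurrentContext()
    _ = loadBitmap("texture.png", intoIndex: 0)  // hypothetical call to the loader above
}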
