I'm handing off to this method and adding the resulting CGImageRef to a CGImageDestinationRef that is set up to finalize as kUTTypeJPEG:
- (CGImageRef)scaleImage:(CGImageRef)fullImageRef originalSize:(CGSize)originalSize scale:(CGFloat)scale
{
    CGRect scaledRect = CGRectMake(0.0, 0.0, originalSize.width * scale, originalSize.height * scale);
    // Reuse the source image's color space and alpha info for the offscreen context.
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(fullImageRef);
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 scaledRect.size.width,
                                                 scaledRect.size.height,
                                                 8,
                                                 0,
                                                 colorSpace,
                                                 CGImageGetAlphaInfo(fullImageRef));
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    // Draw the full image into the scaled rect (offscreen). Drawing a sub-image
    // made with CGImageCreateWithImageInRect here would crop rather than scale.
    CGContextDrawImage(context, scaledRect, fullImageRef);
    CGImageRef scaledImage = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    // The caller is responsible for releasing the returned image.
    return scaledImage;
}
I need to be sure that the resulting JPEG is the best possible color match for the original.
What I'm unclear about is the actual role of the CGImageAlphaInfo constant for my purposes.
I've read the Quartz docs, and also the excellent Programming With Quartz book (by Gelphman and Laden), but I'm still not sure about my approach.
I've set kCGImageDestinationLossyCompressionQuality to 1.0 in the JPEG's property dictionary, along with capturing other properties from the source.
Should I be doing something more to maintain the color integrity of the original?
This is macOS, using garbage collection.
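For reference, the destination setup I'm describing looks roughly like this (a minimal sketch; the output URL, the merged source properties, and error handling are assumptions, not my exact code):

// Write the scaled image out as a JPEG at maximum quality.
CGImageDestinationRef destination = CGImageDestinationCreateWithURL((CFURLRef)outputURL,
                                                                    kUTTypeJPEG, 1, NULL);
// Other captured source properties would be merged into this dictionary as well.
NSDictionary *properties = @{ (id)kCGImageDestinationLossyCompressionQuality : @1.0 };
CGImageDestinationAddImage(destination, scaledImage, (CFDictionaryRef)properties);
CGImageDestinationFinalize(destination);
CFRelease(destination);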
It doesn't much matter, because JPEG doesn't support alpha (at least, not without external masking or an extension), so you can expect that CGImageDestination will throw away the alpha channel at the export stage.
I would try the original image's alpha info first, and if I can't create the bitmap context that way, I would use either kCGImageAlphaNoneSkipFirst or kCGImageAlphaNoneSkipLast. For other destination file formats, of course, I'd use one of the alpha-with-premultiplied-colors pixel formats.
This page has the full list of combinations supported by CGBitmapContext.
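A minimal sketch of that fallback (variable names follow the scaling method above; this is illustrative, not a drop-in replacement):

// Try the original image's alpha info first; if Quartz rejects the combination,
// fall back to an alpha-less pixel format. JPEG output discards alpha anyway.
CGBitmapInfo bitmapInfo = CGImageGetAlphaInfo(fullImageRef);
CGContextRef context = CGBitmapContextCreate(NULL, scaledRect.size.width, scaledRect.size.height,
                                             8, 0, colorSpace, bitmapInfo);
if (context == NULL) {
    context = CGBitmapContextCreate(NULL, scaledRect.size.width, scaledRect.size.height,
                                    8, 0, colorSpace, kCGImageAlphaNoneSkipLast);
}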
Related
I'm having trouble getting a rendered video's colors to match the source content's colors. I'm rendering images into a CGContext, converting the backing data into a CVPixelBuffer and appending that as a frame to an AVAssetWriterInputPixelBufferAdaptor. This causes slight color differences between the images that I'm drawing into the CGContext and the resulting video file.
It seems like there are three things that need to be addressed:
1. Tell AVFoundation what colorspace the video is in.
2. Make the AVAssetWriterInputPixelBufferAdaptor and the CVPixelBuffers I append to it match that colorspace.
3. Use the same colorspace for the CGContext.
The documentation is terrible, so I'd appreciate any guidance on how to do these things or if there is something else I need to do to make the colors be preserved throughout this entire process.
Full code:
AVAssetWriter *_assetWriter;
AVAssetWriterInput *_assetInput;
AVAssetWriterInputPixelBufferAdaptor *_assetInputAdaptor;
NSDictionary *outputSettings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                  AVVideoWidthKey  : @(outputWidth),
                                  AVVideoHeightKey : @(outputHeight) };
_assetInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:outputSettings];
NSDictionary *bufferAttributes = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32ARGB) };
_assetInputAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:_assetInput
sourcePixelBufferAttributes:bufferAttributes];
_assetWriter = [AVAssetWriter assetWriterWithURL:aURL fileType:AVFileTypeMPEG4 error:nil];
[_assetWriter addInput:_assetInput];
[_assetWriter startWriting];
[_assetWriter startSessionAtSourceTime:kCMTimeZero];
NSInteger bytesPerRow = outputWidth * 4;
long size = bytesPerRow * outputHeight;
CGColorSpaceRef srgbSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
UInt8 *data = (UInt8 *)calloc(size, 1);
CGContextRef ctx = CGBitmapContextCreateWithData(data, outputWidth, outputHeight, 8, bytesPerRow, srgbSpace, kCGImageAlphaPremultipliedFirst, NULL, NULL);
// draw everything into ctx
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreateWithBytes(kCFAllocatorSystemDefault,
outputWidth, outputHeight,
kCVPixelFormatType_32ARGB,
data,
bytesPerRow,
ReleaseCVPixelBufferForCVPixelBufferCreateWithBytes,
NULL,
NULL,
&pixelBuffer);
NSDictionary *pbAttachments = @{ (id)kCVImageBufferCGColorSpaceKey : (__bridge id)srgbSpace };
CVBufferSetAttachments(pixelBuffer, (__bridge CFDictionaryRef)pbAttachments, kCVAttachmentMode_ShouldPropagate);
[_assetInputAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:CMTimeMake(0, 60)];
CGColorSpaceRelease(srgbSpace);
[_assetInput markAsFinished];
[_assetWriter finishWritingWithCompletionHandler:^{}];
This is quite a confusing subject and the Apple docs really do not help all that much. I am going to describe the solution I have settled on, based on using the BT.709 colorspace. I am sure someone will have an objection based on colorimetric correctness and the weirdness of various video standards, but this is a complex topic.

First off, don't use kCVPixelFormatType_32ARGB as the pixel type. Always pass kCVPixelFormatType_32BGRA instead, since BGRA is the native pixel layout on both macOS and iPhone hardware, and BGRA is simply faster.

Next, when you create a CGBitmapContext to render into, use the BT.709 colorspace (kCGColorSpaceITUR_709). Also, don't render into a malloc() buffer; render directly into the CoreVideo pixel buffer by creating a bitmap context over the same memory. CoreGraphics will handle the colorspace and gamma conversion from whatever your input image is to BT.709 and its associated gamma.

Then you need to tell AVFoundation the colorspace of the pixel buffer. Do that by making an ICC profile copy and setting it as the kCVImageBufferICCProfileKey attachment on the CoreVideo pixel buffer. That takes care of your issues 1 and 2; you do not need to have the input images in this same colorspace with this approach.

Now, this is of course complex, and actual working source code (yes, actually working) is hard to come by. Here is a github link to a small project that does these exact steps; the code is BSD licensed, so feel free to use it. Note specifically the H264Encoder class, which wraps all this horror up into a reusable module. You can find calling code in encode_h264.m, a little macOS command-line util that encodes PNG to M4V. Also attached are three key Apple docs related to this subject: 1, 2, 3.
MetalBT709Decoder
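Condensed, the steps look roughly like this (buffer attributes, bitmap-info flags, and error handling are assumptions; see the linked project for the full, working code):

// Create a BGRA CVPixelBuffer and render into its memory through a
// CGBitmapContext that uses the BT.709 colorspace.
NSDictionary *attrs = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, outputWidth, outputHeight,
                    kCVPixelFormatType_32BGRA,
                    (__bridge CFDictionaryRef)attrs, &pixelBuffer);

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CGColorSpaceRef bt709 = CGColorSpaceCreateWithName(kCGColorSpaceITUR_709);
CGContextRef ctx = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                         outputWidth, outputHeight, 8,
                                         CVPixelBufferGetBytesPerRow(pixelBuffer),
                                         bt709,
                                         kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little);
// ... draw into ctx; CoreGraphics converts from the source colorspace to BT.709 ...
CGContextRelease(ctx);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

// Tag the buffer with the matching ICC profile so AVFoundation knows its colorspace.
CFDataRef iccProfile = CGColorSpaceCopyICCProfile(bt709);
CVBufferSetAttachment(pixelBuffer, kCVImageBufferICCProfileKey, iccProfile,
                      kCVAttachmentMode_ShouldPropagate);
CFRelease(iccProfile);
CGColorSpaceRelease(bt709);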
I created an app in which I want to display text on top of Google Maps. I chose to use custom markers, but they can only be images, so I decided to create an image from my text using SkiaSharp.
private static ImageSource CreateImageSource(string text)
{
int numberSize = 20;
int margin = 5;
SKBitmap bitmap = new SKBitmap(30, numberSize + margin * 2, SKImageInfo.PlatformColorType, SKAlphaType.Premul);
SKCanvas canvas = new SKCanvas(bitmap);
SKPaint paint = new SKPaint
{
Style = SKPaintStyle.StrokeAndFill,
TextSize = numberSize,
Color = SKColors.Red,
StrokeWidth = 1,
};
canvas.DrawText(text.ToString(), 0, numberSize, paint);
SKImage skImage = SKImage.FromBitmap(bitmap);
SKData data = skImage.Encode(SKEncodedImageFormat.Png, 100);
return ImageSource.FromStream(data.AsStream);
}
The images I create, however, have ugly artifacts at the top, and my feeling is that they get worse if I create multiple images.
I built an example app that shows the artifacts and the code I used to draw the text. It can be found here:
https://github.com/hot33331/SkiaSharpExample
How can I get rid of those artifacts. Am I using skia wrong?
I got the following answer from Matthew Leibowitz on the SkiaSharp GitHub:
The chances are you are not clearing the canvas/bitmap first.
You can either do bitmap.Erase(SKColors.Transparent) or canvas.Clear(SKColors.Transparent) (you can use any color).
The reason for this is performance. When creating a new bitmap, the computer has no way of knowing what background color you want. So, if it was to go transparent and you wanted white, then there would be two draw operations to clear the pixels (and this may be very expensive for large images).
During the allocation of the bitmap, the memory is provided, but the actual data is untouched. If there was anything there previously (which there will be), this data appears as colored pixels.
When I've seen this before, it's been because the memory passed to SkiaSharp was not zeroed. As an optimization, Skia assumes that the memory block passed to it is pre-zeroed. As a result, if your first operation is a clear, it will skip that operation because it thinks the state is already clean. To resolve this, you can manually zero the memory passed to SkiaSharp.
public static SKSurface CreateSurface(int width, int height)
{
// create a block of unmanaged native memory for use as the Skia bitmap buffer.
// unfortunately, this may not be zeroed in some circumstances.
IntPtr buff = System.Runtime.InteropServices.Marshal.AllocCoTaskMem(width * height * 4);
byte[] empty = new byte[width * height * 4];
// copy in zeroed memory.
// maybe there's a more sanctioned way to do this.
System.Runtime.InteropServices.Marshal.Copy(empty, 0, buff, width * height * 4);
// create the actual SkiaSharp surface.
var info = new SKImageInfo(width, height, SKColorType.Rgba8888, SKAlphaType.Premul);
var surface = SKSurface.Create(info, buff, width * 4);
return surface;
}
Edit: by the way, I assume this is a bug in SkiaSharp. The samples/APIs that create the buffer for you should probably be zeroing it out. Depending on the platform it can be hard to repro, since the memory allocator behaves differently and is more or less likely to hand you untouched memory.
I am developing an OS X app that uses custom Core Image filters to attain a particular effect: Set one image's luminance as another image's alpha channel. There are filters to use an image as a mask for another but they require a third -background- image; I need to output an image with transparent parts, no background set.
As explained in Apple's documentation, I wrote the kernel code and tested it in QuartzComposer; it works as expected.
The kernel code is:
kernel vec4 setMask(sampler src, sampler mask)
{
vec4 color = sample(src, samplerCoord(src));
vec4 alpha = sample(mask, samplerCoord(mask));
color.a = alpha.r;
// (mask image is grayscale; any channel colour will do)
return color;
}
But when I try to use the filter from my code (either packaging it as an image unit or directly from the app source), the output image turns out to have the following 'undefined'(?) extent:
extent CGRect origin=(x=-8.988465674311579E+307, y=-8.988465674311579E+307) size=(width=1.797693134862316E+308, height=1.797693134862316E+308)
and further processing (convert to NSImage bitmap representation, write to file, etc.) fails. The filter itself loads perfectly (not nil) and the output image it produces isn't nil either, just has an invalid rect.
EDIT: Also, I copied the exported image unit (plugin), to both /Library/Graphics/Image Units and ~/Library/Graphics/Image Units, so that it appears in QuartzComposer's Patch Library, but when I connect it to the source images and Billboard renderer, nothing is drawn (transparent background).
Am I missing something?
EDIT: Looks like I assumed too much about the default behaviour of -[CIFilter apply:].
My filter subclass code's -outputImage implementation was this:
- (CIImage*) outputImage
{
CISampler* src = [CISampler samplerWithImage:inputImage];
CISampler* mask = [CISampler samplerWithImage:inputMaskImage];
return [self apply:setMaskKernel, src, mask, nil];
}
So I tried and changed it to this:
- (CIImage*) outputImage
{
CISampler* src = [CISampler samplerWithImage:inputImage];
CISampler* mask = [CISampler samplerWithImage:inputMaskImage];
CGRect extent = [inputImage extent];
NSDictionary* options = @{ kCIApplyOptionExtent: @[@(extent.origin.x),
                                                   @(extent.origin.y),
                                                   @(extent.size.width),
                                                   @(extent.size.height)],
                           kCIApplyOptionDefinition: @[@(extent.origin.x),
                                                       @(extent.origin.y),
                                                       @(extent.size.width),
                                                       @(extent.size.height)]
                         };
return [self apply:setMaskKernel arguments:@[src, mask] options:options];
}
...and now it works!
How are you drawing it? And what does your CIFilter code look like? You'll need to provide a kCIApplyOptionDefinition most likely when you call apply: in outputImage.
Alternatively, you can also change how you are drawing the image, using CIContext's drawImage:inRect:fromRect:.
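For that second approach, a minimal sketch (the ciContext, outputImage, targetRect, and inputImage variables here are assumed, not taken from the question):

// Draw the filter's output through an explicit destination and source rect,
// so the undefined (infinite) extent never comes into play.
[ciContext drawImage:outputImage
              inRect:targetRect
            fromRect:[inputImage extent]];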
I have been studying OpenGL ES for iOS.
I wonder whether data in a YUV format can be displayed without converting it to RGB.
In most cases the YUV data has to be converted to RGB for display, but that conversion is very slow, so the display is not smooth.
So I would like to try to display YUV data without converting it to RGB.
Is it possible? If so, what can I do?
Please give me some advice.
I think it is not possible in OpenGL ES to display YUV data without converting it to RGB.
You can do this very easily using OpenGL ES 2.0 shaders. I use this technique for my super-fast iOS camera app SnappyCam. The fragment shader would perform the matrix multiplication to take you from YCbCr ("YUV") to RGB. You can have each {Y, Cb, Cr} channel in a separate GL_LUMINANCE texture, or combine the {Cb, Cr} textures together using a GL_LUMINANCE_ALPHA texture if your chrominance data is already interleaved (Apple call this a bi-planar format).
See my related answer to the question YUV to RGBA on Apple A4, should I use shaders or NEON? here on StackOverflow.
You may also be able to do this using the fixed rendering pipeline of ES 1.1, but I haven't tried it. I would look toward the texture blending functions, e.g. those given in this OpenGL Texture Combiners Wiki Page.
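A minimal sketch of such a fragment shader, embedded as an Objective-C string constant (this assumes full-range BT.601 coefficients and the bi-planar Y / interleaved CbCr texture layout described above; the uniform names are illustrative):

// Full-range BT.601 YCbCr -> RGB conversion in a GLES 2.0 fragment shader.
// s_luma samples the Y plane (GL_LUMINANCE); s_chroma samples the
// interleaved CbCr plane (GL_LUMINANCE_ALPHA).
static NSString *const kYCbCrToRGBFragmentShader = @""
    "varying highp vec2 v_texCoord;                                  \n"
    "uniform sampler2D s_luma;                                       \n"
    "uniform sampler2D s_chroma;                                     \n"
    "void main()                                                     \n"
    "{                                                               \n"
    "    mediump float y  = texture2D(s_luma,   v_texCoord).r;       \n"
    "    mediump float cb = texture2D(s_chroma, v_texCoord).r - 0.5; \n"
    "    mediump float cr = texture2D(s_chroma, v_texCoord).a - 0.5; \n"
    "    gl_FragColor = vec4(y + 1.402 * cr,                         \n"
    "                        y - 0.344 * cb - 0.714 * cr,            \n"
    "                        y + 1.772 * cb,                         \n"
    "                        1.0);                                   \n"
    "}                                                               \n";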
If you are looking for a solution for an iOS / iPhone application, there is one.
This is a way to convert a CMSampleBufferRef to a UIImage when the video pixel format is set to kCVPixelFormatType_420YpCbCr8BiPlanarFullRange (note that it reads only the Y plane, so the resulting image is grayscale):
-(UIImage *) imageFromSamplePlanerPixelBuffer:(CMSampleBufferRef) sampleBuffer{
@autoreleasepool {
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the base address of the Y (luma) plane
void *baseAddress = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
// Get the number of bytes per row for the Y plane
size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent gray color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGImageAlphaNone);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
}
If you are looking for other mobile platforms, I can provide solutions for those too.
I'm not sure I understand how to free a bitmap context.
I'm doing the following:
CGContextRef context = CGBitmapContextCreate(nil, size.width, size.height, 8, 0, CGColorSpaceCreateDeviceRGB(), kCGBitmapAlphaInfoMask);
.
. // (All straightforward code)
.
CGContextRelease(context);
Xcode Analyze still gives me a "potential memory leak" on the CGBitmapContextCreate line.
What am I doing wrong?
As you don't assign the result of CGColorSpaceCreateDeviceRGB() to a variable, you lose the reference to the object created by that call.
You need that reference later to release the colorspace object. Core Graphics follows the Core Foundation memory management rules.
You can find more about that here.
Fixed version of your code:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(nil, size.width, size.height, 8, 0, colorSpace, kCGBitmapAlphaInfoMask);
.
. // (All straightforward code)
.
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
If you click the blue icon that the code analyzer places in your source, you get an arrow graph that shows you the origin of the leak. (I assume it would point to the line where you create the color space)
You are leaking the color space object from CGColorSpaceCreateDeviceRGB() call. You need to release the color space too.