Why does QLPreviewRequestSetDataRepresentation on Mavericks return the error "CGImageCreate: invalid image size: 0 x 0" for PNG?

My Quick Look generator used to work properly but is now broken.
Is it a bug or am I doing something wrong?
Here’s my code:
OSStatus GeneratePreviewForURL(void *thisInterface, QLPreviewRequestRef preview,
                               CFURLRef url, CFStringRef contentTypeUTI,
                               CFDictionaryRef options) {
    NSDictionary *myDoc = [NSDictionary dictionaryWithContentsOfURL:(__bridge NSURL *)url];
    if (myDoc) {
        NSData *pngData = [myDoc valueForKey:@"pngPreview"];
        if (pngData) {
            QLPreviewRequestSetDataRepresentation(preview, (__bridge CFDataRef)pngData,
                                                  kUTTypeImage, NULL);
        }
    }
    return noErr;
}
My document is a normal plist with a PNG preview stored as data in it.
I checked that pngPreview does contain PNG data: I created an image from it, and its size was 350×350.
However, I’m constantly getting these errors:
qlmanage[702] : CGImageCreate: invalid image size: 0 x 0.
qlmanage[702:303] *** CFMessagePort: bootstrap_register(): failed 1100 (0x44c) 'Permission denied', port = 0x9e27, name = 'com.apple.tsm.portname'
See /usr/include/servers/bootstrap_defs.h for the error codes.
qlmanage[702:303] *** CFMessagePort: bootstrap_register(): failed 1100 (0x44c) 'Permission denied', port = 0x3f2b, name = 'com.apple.CFPasteboardClient'
See /usr/include/servers/bootstrap_defs.h for the error codes.
qlmanage[702:303] Failed to allocate communication port for com.apple.CFPasteboardClient; this is likely due to sandbox restrictions
My app is not sandboxed, so I don't think the last three errors matter.
I used to use kUTTypePNG but have tried kUTTypeImage to no avail (the docs for QLPreviewRequestSetDataRepresentation say the currently supported UTIs are kUTTypeImage, kUTTypePDF, kUTTypeHTML, kUTTypeXML, kUTTypePlainText, kUTTypeRTF, kUTTypeMovie, and kUTTypeAudio).
Other points to consider: the docs state in one place:
"The binary of a Quick Look generator must be universal and must be 32-bit only."
But another page states:
"For OS X v10.6 and later, you must build Quick Look generators for both 32- and 64-bit."
Which is rather unclear...
How do I set my target?

Facing the same problem, I decided to go an alternate route: use QLPreviewRequestCreateContext to get a context to draw my image into:
QLPreviewRequestRef preview; // the preview request passed to GeneratePreviewForURL()
CGImageRef image;            // create your CGImage however you like

CGSize size = CGSizeMake(CGImageGetWidth(image), CGImageGetHeight(image));
CGContextRef ctxt = QLPreviewRequestCreateContext(preview, size, YES, nil);
CGContextDrawImage(ctxt, CGRectMake(0, 0, size.width, size.height), image);
QLPreviewRequestFlushContext(preview, ctxt);
CGContextRelease(ctxt);
At least that works...
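For the "create your CGImage however you like" step, here is a minimal sketch using ImageIO, assuming the pngPreview data from the question (the helper name is hypothetical):

#import <ImageIO/ImageIO.h>

// Hypothetical helper: build a CGImage from the PNG data stored in the plist.
// The caller is responsible for releasing the returned image.
static CGImageRef CreateImageFromPNGData(NSData *pngData) {
    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)pngData, NULL);
    if (!source) return NULL;
    CGImageRef image = CGImageSourceCreateImageAtIndex(source, 0, NULL);
    CFRelease(source);
    return image;
}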

Related

vImage on the Mac: unable to convert from Core Video

Edit: Fixed. A working sample is at https://github.com/halmueller/vImage-mac-sample.
I'm trying to read the feed from a MacBook Pro's FaceTime camera to process it with the vImage framework. I'm following the example in Apple's VideoCaptureSample, which is written for iOS.
I'm getting hung up on creating the vImageConverter, which creates an image buffer that vImage can use. My call to vImageConverter_CreateForCVToCGImageFormat() fails, with the console error "insufficient information in srcCVFormat to decode image. vImageCVImageFormatError = -21601".
The same call works on iOS. But the image formats are different on iOS and macOS. On iOS, the vImageConverter constructor is able to infer the format information, but on macOS, it can't.
Here's my setup code:
func displayEqualizedPixelBuffer(pixelBuffer: CVPixelBuffer) {
    // `converter` and `cgImageFormat` are properties of the enclosing class in the full source.
    var error = kvImageNoError
    if converter == nil {
        let cvImageFormat = vImageCVImageFormat_CreateWithCVPixelBuffer(pixelBuffer).takeRetainedValue()
        let deviceRGBSpace = CGColorSpaceCreateDeviceRGB()
        let dcip3ColorSpace = CGColorSpace(name: CGColorSpace.dcip3) // tried in place of deviceRGB; same result
        vImageCVImageFormat_SetColorSpace(cvImageFormat, deviceRGBSpace)
        print(cvImageFormat)
        if let unmanagedConverter = vImageConverter_CreateForCVToCGImageFormat(
            cvImageFormat,
            &cgImageFormat,
            nil,
            vImage_Flags(kvImagePrintDiagnosticsToConsole),
            &error) {
            guard error == kvImageNoError else {
                return
            }
            converter = unmanagedConverter.takeRetainedValue()
        } else {
            return
        }
    }
}
When I run on iOS, I see in the console:
vImageCVFormatRef 0x101e12210:
type: '420f'
matrix:
0.29899999499321 0.58700001239777 0.11400000005960
-0.16873589158058 -0.33126410841942 0.50000000000000
0.50000000000000 -0.41868758201599 -0.08131241053343
chroma location: <RGB Base colorspace missing>
RGB base colorspace: =Bo
On macOS, though, the call to vImageConverter_CreateForCVToCGImageFormat returns nil, and I see:
vImageCVFormatRef 0x10133a270:
type: '2vuy'
matrix:
0.29899999499321 0.58700001239777 0.11400000005960
-0.16873589158058 -0.33126410841942 0.50000000000000
0.50000000000000 -0.41868758201599 -0.08131241053343
chroma location: <RGB Base colorspace missing>
RGB base colorspace: Рü
2018-03-13... kvImagePrintDiagnosticsToConsole: vImageConverter_CreateForCVToCGImageFormat error:
insufficient information in srcCVFormat to decode image. vImageCVImageFormatError = -21601
Note that the image type (four-letter code) is different, as is the RGB base colorspace. I've tried on the Mac using dcip3ColorSpace instead of deviceRGB, and the results are the same.
What am I missing to get this vImageConverter created?
The -21601 error code means that the source CV format is missing chroma siting information (see http://dougkerr.net/Pumpkin/articles/Subsampling.pdf for a nice background on chroma siting). You can fix this by explicitly setting it with vImageCVImageFormat_SetChromaSiting. So, immediately after setting the format's color space, and before creating the converter (i.e. where you have print(cvImageFormat)), add the following:
vImageCVImageFormat_SetChromaSiting(cvImageFormat, kCVImageBufferChromaLocation_Center)
Cheers!
simon
So, while the answer about setting the chroma property on the vImage format does work, there is a better way: set the property on the Core Video pixel buffer itself, and then when you call vImageCVImageFormat_CreateWithCVPixelBuffer() it will just work, like so:
NSDictionary *pbAttachments = @{
    (__bridge NSString *)kCVImageBufferChromaLocationTopFieldKey: (__bridge NSString *)kCVImageBufferChromaLocation_Center,
    (__bridge NSString *)kCVImageBufferAlphaChannelIsOpaque: (id)kCFBooleanTrue,
};

CVBufferRef pixelBuffer = cvPixelBuffer;
CVBufferSetAttachments(pixelBuffer, (__bridge CFDictionaryRef)pbAttachments, kCVAttachmentMode_ShouldPropagate);
For extra points, you can also set the colorspace ref with the kCVImageBufferICCProfileKey and the CGColorSpaceCopyICCData() API.
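A sketch of that ICC attachment, assuming the pixelBuffer variable from above (CGColorSpaceCopyICCData() requires macOS 10.12 or later):

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CFDataRef iccData = CGColorSpaceCopyICCData(colorSpace);
if (iccData) {
    // Attach the profile so downstream consumers know the buffer's color space.
    CVBufferSetAttachment(pixelBuffer, kCVImageBufferICCProfileKey,
                          iccData, kCVAttachmentMode_ShouldPropagate);
    CFRelease(iccData);
}
CGColorSpaceRelease(colorSpace);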


How to debug why Mac OS is not using Hardware H264 encoder

I'm trying to encode a video-only stream using H264, and I want to use the hardware encoder in order to compare quality and resource consumption between hardware and CPU encoding. The thing is, I'm not able to force the OS to use the hardware encoder.
This is the code I'm using to create the VTCompressionSession:
var status: OSStatus
let encoderSpecifications: CFDictionary? = [
    kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder as String: true,
    kVTVideoEncoderSpecification_RequireHardwareAcceleratedVideoEncoder as String: true,
    kVTVideoEncoderSpecification_EncoderID as String: "com.apple.videotoolbox.videoencoder.24rgb" // also tried without this parameter so the system can decide which encoder ID to use, but that doesn't work either
]
let pixelBufferOptions: CFDictionary? = [
    kCVPixelBufferWidthKey as String: Int(width),
    kCVPixelBufferHeightKey as String: Int(height),
    kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_24RGB) // tried commenting this out in case there was a pixel-format constraint, but it didn't change anything
]
status = VTCompressionSessionCreate(kCFAllocatorDefault, width, height, CMVideoCodecType(kCMVideoCodecType_H264), encoderSpecifications, pixelBufferOptions, nil, { (outputCallbackRefCon: UnsafeMutablePointer<Void>, sourceFrameRefCon: UnsafeMutablePointer<Void>, status: OSStatus, infoFlags: VTEncodeInfoFlags, sampleBuffer: CMSampleBuffer?) -> Void in
    // ...
}, unsafeBitCast(self, UnsafeMutablePointer<Void>.self), &compressionSession)
I opened the Console, and this is the only relevant message I get when I try to create the session:
10/28/15 22:06:27.711 Dupla-Mac[87762]: <<<< VTVideoEncoderSelection >>>> VTSelectAndCreateVideoEncoderInstanceInternal: no video encoder found for 'avc1'
This is the status code I get when I use the EncoderID:
2015-10-28 22:17:13.480 Dupla-Mac[87895:5578917] Couldn't create compression session :( -12908
And this is the one I get when I don't use the EncoderID:
2015-10-28 22:18:16.695 Dupla-Mac[87996:5581914] Couldn't create compression session :( -12915
Both codes relate to the resource being unavailable, but I couldn't find any difference between them. I've checked that the best-known functionality that might be holding the hardware encoder is turned off, but I don't know how to verify this for sure: AirPlay is off, QuickTime is off, no app is accessing the camera, and so on.
TL;DR: is there any way to force the hardware encoder, or to find out what strategy the OS uses to enable it, and why it is unavailable at a given moment?
Thanks in advance!
I guess you've already resolved the problem, but for others: the only HW-accelerated encoder available on macOS (10.8-10.12, for all Macs from 2012 on) / iOS (8-10) is com.apple.videotoolbox.videoencoder.h264.gva, and here is the full list: https://gist.github.com/vade/06ace2e33561a79cc240
To get the codec list:

CFArrayRef encoder_list;
VTCopyVideoEncoderList(NULL, &encoder_list);
CFIndex size = CFArrayGetCount(encoder_list);
for (CFIndex i = 0; i < size; i++) {
    CFDictionaryRef encoder_dict = CFArrayGetValueAtIndex(encoder_list, i);
    CFStringRef type = CFDictionaryGetValue(encoder_dict, kVTVideoEncoderList_CodecType);
    CFStringRef encoderID = CFDictionaryGetValue(encoder_dict, kVTVideoEncoderList_EncoderID);
    CFStringRef codecName = CFDictionaryGetValue(encoder_dict, kVTVideoEncoderList_CodecName);
    CFStringRef encoderName = CFDictionaryGetValue(encoder_dict, kVTVideoEncoderList_EncoderName);
    CFStringRef display_name = CFDictionaryGetValue(encoder_dict, kVTVideoEncoderList_DisplayName);
    NSLog(@"%@ %@ %@ %@ %@", type, encoderID, codecName, encoderName, display_name);
}
CFRelease(encoder_list);
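If com.apple.videotoolbox.videoencoder.h264.gva shows up in that list on your machine, you could then ask for hardware explicitly. A minimal sketch, with hypothetical 1280×720 dimensions, dropping the question's 24rgb encoder ID (which appears to name an RGB encoder rather than an H.264 one):

#import <VideoToolbox/VideoToolbox.h>

// Hypothetical output callback; encoded sample buffers arrive here.
static void compressionCallback(void *refCon, void *frameRefCon, OSStatus status,
                                VTEncodeInfoFlags flags, CMSampleBufferRef sampleBuffer) {
    // handle the encoded frame
}

// Hypothetical helper: create a session that insists on hardware encoding.
static VTCompressionSessionRef CreateHardwareH264Session(void) {
    CFMutableDictionaryRef spec = CFDictionaryCreateMutable(kCFAllocatorDefault, 0,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(spec,
        kVTVideoEncoderSpecification_RequireHardwareAcceleratedVideoEncoder, kCFBooleanTrue);

    VTCompressionSessionRef session = NULL;
    OSStatus status = VTCompressionSessionCreate(kCFAllocatorDefault, 1280, 720,
                                                 kCMVideoCodecType_H264, spec, NULL, NULL,
                                                 compressionCallback, NULL, &session);
    CFRelease(spec);
    // kVTCouldNotFindVideoEncoderErr (-12908) here would mean no hardware H.264
    // encoder matched the specification on this machine.
    return status == noErr ? session : NULL;
}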

Why does CMSampleBufferGetImageBuffer return NULL

I have built some code to process video files on OS X, frame by frame. The following is an extract from the code, which builds OK, opens the file, locates the video track (the only track), and starts reading CMSampleBuffers without problem. However, each CMSampleBufferRef I obtain returns NULL when I try to extract the pixel buffer for the frame. There's no indication in the documentation as to why I should expect a NULL return value or how to fix the issue. It happens with all the videos I've tested, regardless of capture source or codec.
Any help greatly appreciated.
NSString *assetInPath = @"/Users/Dave/Movies/movie.mp4";
NSURL *assetInUrl = [NSURL fileURLWithPath:assetInPath];
AVAsset *assetIn = [AVAsset assetWithURL:assetInUrl];
NSError *error;
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:assetIn error:&error];
AVAssetTrack *track = [assetIn.tracks objectAtIndex:0];
AVAssetReaderOutput *assetReaderOutput = [[AVAssetReaderTrackOutput alloc]
                                          initWithTrack:track
                                          outputSettings:nil];
[assetReader addOutput:assetReaderOutput];

// Start reading
[assetReader startReading];

CMSampleBufferRef sampleBuffer;
do {
    sampleBuffer = [assetReaderOutput copyNextSampleBuffer];
    /**
     ** At this point, sampleBuffer is non-NULL and has all the appropriate attributes
     ** to indicate that it's a video frame, 320x240 or whatever, and looks perfectly
     ** fine. But the next line always returns NULL without logging any obvious error.
     **/
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer != NULL) {
        size_t width = CVPixelBufferGetWidth(pixelBuffer);
        size_t height = CVPixelBufferGetHeight(pixelBuffer);
        CVPixelBufferLockBaseAddress(pixelBuffer, 0);
        // ... other processing removed here for clarity ...
    }
} while( ... );
To be clear, I've stripped all error checking code but no problems were being indicated in that code. i.e. The AVAssetReader is reading, CMSampleBufferRef looks fine etc.
You haven't specified any outputSettings when creating your AVAssetReaderTrackOutput. I ran into your issue when specifying nil in order to receive the video track's original pixel format from copyNextSampleBuffer. In my app I wanted to ensure no conversion was happening, for the sake of performance; if that isn't a big concern for you, specify a pixel format in the output settings (see the sketch after the list below).
The following are Apple's recommended pixel formats, based on the hardware capabilities:
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
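For instance, a minimal sketch reusing the question's track variable and picking the full-range format (either should do):

NSDictionary *outputSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey:
        @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
};
// With a pixel format specified, CMSampleBufferGetImageBuffer returns a CVPixelBuffer.
AVAssetReaderOutput *assetReaderOutput =
    [[AVAssetReaderTrackOutput alloc] initWithTrack:track
                                     outputSettings:outputSettings];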
If instead you don't supply any outputSettings, you're forced to use the raw data contained in the frame. You have to get the block buffer from the sample buffer using CMSampleBufferGetDataBuffer(sampleBuffer); once you have that, you can get the actual location of the data:

CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
size_t blockBufferLength;
char *blockBufferPointer;
CMBlockBufferGetDataPointer(blockBuffer, 0, NULL, &blockBufferLength, &blockBufferPointer);

Look at *blockBufferPointer and decode the bytes using the frame-header information for your required codec.
FWIW: here is what the official docs say about the return value of CMSampleBufferGetImageBuffer:
"Result is a CVImageBuffer of media data. The result will be NULL if the CMSampleBuffer does not contain a CVImageBuffer, or if the CMSampleBuffer contains a CMBlockBuffer, or if there is some other error."
Also note that the caller does not own the returned dataBuffer from CMSampleBufferGetImageBuffer, and must retain it explicitly if the caller needs to maintain a reference to it.
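A sketch of that ownership handling:

CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
if (pixelBuffer) {
    CFRetain(pixelBuffer);   // keep the buffer alive beyond the sample buffer's lifetime
    // ... use pixelBuffer as long as needed ...
    CFRelease(pixelBuffer);  // balance the retain when done
}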
Hopefully this info helps.

Failing to inflate a png inside a plist with Windows using Cocos2D-X

I am running the particleWithFile() function of Cocos2D-X with a plist file created on a Mac. The image data is embedded in the plist file under the "textureImageData" key.
On Mac it works fine, but on Windows it fails on CCAssert(isOK); see the Cocos2D-X code below (CCParticleSystem.cpp):
char *textureData = (char *)valueForKey("textureImageData", dictionary);
CCAssert(textureData, "");
int dataLen = strlen(textureData);
if (dataLen != 0)
{
    // if it fails, try to get it from the base64-gzipped data
    int decodeLen = base64Decode((unsigned char *)textureData, (unsigned int)dataLen, &buffer);
    CCAssert(buffer != NULL, "CCParticleSystem: error decoding textureImageData");
    CC_BREAK_IF(!buffer);

    int deflatedLen = ZipUtils::ccInflateMemory(buffer, decodeLen, &deflated);
    CCAssert(deflated != NULL, "CCParticleSystem: error ungzipping textureImageData");
    CC_BREAK_IF(!deflated);

    image = new CCImage();
    bool isOK = image->initWithImageData(deflated, deflatedLen);
    CCAssert(isOK, "CCParticleSystem: error init image with Data");
    CC_BREAK_IF(!isOK);

    m_pTexture = CCTextureCache::sharedTextureCache()->addUIImage(image, fullpath.c_str());
}
It seems that the decoding passes successfully and that the problem is inside zlib's inflate() function, which fails to unzip the PNG file.
Any suggestions?
OK, I found the problem.
Apparently my resource wasn't a PNG file but a TIFF file. So inflate() returned a correct buffer (of a TIFF file; it looked too long to me, so I suspected it was erroneous), but initWithImageData() failed to create an image, since TIFF is not supported by Cocos2D-X on Windows.
I encountered the problem because I had set CC_USE_TIFF to 0.
After restoring CC_USE_TIFF to 1, the problem was resolved.
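A quick way to confirm what the inflated buffer actually contains is to check its magic bytes; a sketch, assuming the deflated and deflatedLen variables from the code above:

#include <stdbool.h>
#include <string.h>

// PNG starts with a fixed 8-byte signature; TIFF starts with "II*\0" or "MM\0*".
static bool isPNGData(const unsigned char *data, int len) {
    static const unsigned char sig[8] = { 0x89, 'P', 'N', 'G', 0x0D, 0x0A, 0x1A, 0x0A };
    return len >= 8 && memcmp(data, sig, 8) == 0;
}

static bool isTIFFData(const unsigned char *data, int len) {
    return len >= 4 && (memcmp(data, "II*\0", 4) == 0 ||   // little-endian TIFF
                        memcmp(data, "MM\0*", 4) == 0);    // big-endian TIFF
}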
