replaceFormatDescription:withFormatDescription: - macos

I am having trouble with a customized CMFormatDescription in an AVMutableMovieTrack.
The replacement itself seems to work, but the modification appears to be volatile: I am unable to write the modified formatDescription out into the movie header. I suspect this is a bug in movieHeaderWithFileType:error:.
Is there any way to make a movie header that contains the modified format description?
In detail:
Since macOS 10.13, AVMutableMovieTrack supports replacing a track's format description:
- (void)replaceFormatDescription:(CMFormatDescriptionRef)formatDescription
           withFormatDescription:(CMFormatDescriptionRef)newFormatDescription;
When I call AVMovie's movieHeaderWithFileType:error: or writeMovieHeaderToURL:fileType:options:error:, the resulting movie header still contains the original, unchanged video media format description, so I cannot save the modification.
- (NSData *)movieHeaderWithFileType:(AVFileType)fileType
                              error:(NSError * _Nullable *)outError;
- (BOOL)writeMovieHeaderToURL:(NSURL *)URL
                     fileType:(AVFileType)fileType
                      options:(AVMovieWritingOptions)options
                        error:(NSError * _Nullable *)outError;
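(For reference, a minimal sketch of the Swift counterparts of these two calls, assuming `movie` is an AVMovie/AVMutableMovie and `outputURL` is a destination URL of your choosing:)
let headerData = try movie.makeMovieHeader(fileType: .mov)
try movie.writeHeader(to: outputURL, fileType: .mov, options: .addMovieHeaderToDestination)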
Sample source:
https://github.com/MyCometG3/cutter2/blob/master/cutter2/MovieMutator.swift
var newFormat : CMVideoFormatDescription? = nil
let codecType = CMFormatDescriptionGetMediaSubType(format) as CMVideoCodecType
let dimensions = CMVideoFormatDescriptionGetDimensions(format)
// `dict` holds the (modified) format description extensions
let result = CMVideoFormatDescriptionCreate(kCFAllocatorDefault,
                                            codecType,
                                            dimensions.width,
                                            dimensions.height,
                                            dict,
                                            &newFormat)
if result == noErr, let newFormat = newFormat {
    track.replaceFormatDescription(format, with: newFormat)
    count += 1
} else {
    // error handling omitted in this excerpt
}
https://github.com/MyCometG3/cutter2/blob/master/cutter2/MovieMutatorBase.swift
let movie : AVMovie = internalMovie.mutableCopy() as! AVMutableMovie
let data = try? movie.makeMovieHeader(fileType: AVFileType.mov)
return data

I have found out why the modification is lost.
According to CMFormatDescription.h, I have to remove two types of extensions when copying the extensions from the original format description.
CM_EXPORT const CFStringRef kCMFormatDescriptionExtension_VerbatimSampleDescription
    __OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_4_0);

@discussion This extension is used to ensure that roundtrips from sample descriptions
    to CMFormatDescriptions back to sample descriptions preserve the exact original
    sample descriptions.
    IMPORTANT: If you make a modified clone of a CMFormatDescription, you must
    delete this extension from the clone, or your modifications could be lost.

CM_EXPORT const CFStringRef kCMFormatDescriptionExtension_VerbatimISOSampleEntry
    __OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_4_0);

@discussion This extension is used to ensure that roundtrips from ISO Sample Entry (i.e. AudioSampleEntry or VisualSampleEntry)
    to CMFormatDescriptions back to ISO Sample Entry preserve the exact original
    sample descriptions.
    IMPORTANT: If you make a modified clone of a CMFormatDescription, you must
    delete this extension from the clone, or your modifications could be lost.
So the code snippet will look like this:
let formats = track.formatDescriptions as! [CMFormatDescription]
for format in formats {
    guard let cfDict = CMFormatDescriptionGetExtensions(format) else { continue }
    let dict : NSMutableDictionary = NSMutableDictionary(dictionary: cfDict)
    dict[kCMFormatDescriptionExtension_VerbatimSampleDescription] = nil
    dict[kCMFormatDescriptionExtension_VerbatimISOSampleEntry] = nil
    // ... (create a new format description from the cleaned-up `dict` and replace it, as above)
}
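Putting the pieces together, a minimal, untested sketch of the whole pass could look like the following. The helper name rewriteVideoFormatDescriptions is mine; it assumes `track` is the video AVMutableMovieTrack of a mutable copy of the movie and uses the same Swift API names as the snippets above:
func rewriteVideoFormatDescriptions(of track: AVMutableMovieTrack) {
    let formats = track.formatDescriptions as! [CMFormatDescription]
    for format in formats {
        // Copy the existing extensions and drop the two verbatim payloads,
        // otherwise the original sample description is written back unchanged.
        var extensions: [String: Any] = [:]
        if let cfDict = CMFormatDescriptionGetExtensions(format) {
            extensions = (cfDict as NSDictionary) as! [String: Any]
        }
        extensions[kCMFormatDescriptionExtension_VerbatimSampleDescription as String] = nil
        extensions[kCMFormatDescriptionExtension_VerbatimISOSampleEntry as String] = nil
        // ... apply the desired modifications to `extensions` here ...

        var newFormat: CMVideoFormatDescription? = nil
        let dimensions = CMVideoFormatDescriptionGetDimensions(format)
        let result = CMVideoFormatDescriptionCreate(kCFAllocatorDefault,
                                                    CMFormatDescriptionGetMediaSubType(format),
                                                    dimensions.width,
                                                    dimensions.height,
                                                    extensions as CFDictionary,
                                                    &newFormat)
        if result == noErr, let newFormat = newFormat {
            track.replaceFormatDescription(format, with: newFormat)
        }
    }
}
After the loop, the header produced by makeMovieHeader(fileType:) should then contain the modified descriptions.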

Related

Cocoa Converting RTFD to ASCII and back

Is it possible to convert an NSAttributedString with attachments (RTFD, not RTF) to ASCII, edit the stream, and convert it back? So far I am able to convert an RTFD to a String stream, but turning it back into an NSData object does not work. Here's the code I'm using in a playground.
import Cocoa
func stream(attr: NSAttributedString) -> String? {
    if let d = attr.rtfd(from: NSMakeRange(0, attr.length), documentAttributes: [NSDocumentTypeDocumentAttribute: NSRTFDTextDocumentType]) {
        if let str = String(data: d, encoding: .ascii) { return str }
        else {
            print("Unable to produce RTFD string")
            return nil
        }
    }
    print("Unable to produce RTFD data stream")
    return nil
}
if let im = NSImage(named: "image.png") {
    let a = NSTextAttachment()
    a.image = im
    let s = NSAttributedString(attachment: a)
    if let str = stream(attr: s) {
        print("\(str)\n") // prints a string, which contains RTF code combined with the NSTextAttachment string representation
        if let data = str.data(using: .ascii) { // this is where things stop working
            if let newRTF = NSAttributedString(rtfd: data as Data, documentAttributes: nil) {
                print(newRTF)
            }
            else { print("rtfd was not created") }
        }
        else { print("could not make data") }
    }
}
What am I missing? Or is my entire concept wrong here? I am doing this to get around a limitation of the way OS X handles images attached in RTF documents.
Edit:
The limitation I am trying to address is being able to set the size of an image in an RTF stream. The text handling system requires that we use NSTextAttachment. Whenever an image is pasted in that way, it is automatically sized to whatever its pixel height and width are. Unfortunately there is no way to control this property. I have tried the approach here and also all the techniques here.
As for the ASCII stream, I'm not trying to edit the image attachment itself. When the stream is printed, the actual RTF code is visible and editable. This works and would be a good workaround for the limitation. All I need is to edit the RTF code and change the \width and \height properties that Apple uses.
After your edit I can see what you are trying to do. It's an interesting idea, but it won't work - at least not easily.
Take a look at the value of d: it is not an ASCII string stored as a value of type Data (or NSData). It is a serialised representation of multiple items: the RTF stream (text) and the image data (binary). If you convert this to an ASCII string and back again it is not going to work; you can't represent arbitrary binary data as ASCII unless you encode it (e.g. with something like Base64 encoding).
Now you could attempt what you are trying in a slightly different way: skip the conversion to ASCII and edit the Data value directly. That is certainly possible, but as you are editing a format you don't know (the serialised representation) you would have to be careful... And even if you succeed in editing the representation there is no guarantee that converting back to an NSAttributedString with an NSTextAttachment will preserve your edits.
I suggest you tackle this another way. You have an NSAttributedString and you don't like the RTF produced after you write it to a file. So edit the RTF after it is written: open up the RTFD package, open the contained RTF file (TXT.rtf), edit it, and write it back. A rough sketch of this is shown below.
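For illustration only, a rough, untested sketch of that approach. The helper name, the package URL, and the particular replacingOccurrences edit are placeholders, not the exact control words Apple emits:
import Cocoa

// Write the attributed string out as an RTFD package (a directory containing TXT.rtf
// plus the attachment files), then edit the contained RTF as plain text.
func resizeImages(in attributed: NSAttributedString, packageURL: URL) throws {
    let wrapper = try attributed.fileWrapper(from: NSRange(location: 0, length: attributed.length),
                                             documentAttributes: [.documentType: NSAttributedString.DocumentType.rtfd])
    try wrapper.write(to: packageURL, options: .atomic, originalContentsURL: nil)

    // TXT.rtf is 7-bit RTF text, so it survives a round trip through String.
    let rtfURL = packageURL.appendingPathComponent("TXT.rtf")
    var rtf = try String(contentsOf: rtfURL, encoding: .ascii)
    // ... edit the control words you care about here; this replacement is purely illustrative ...
    rtf = rtf.replacingOccurrences(of: "\\width640", with: "\\width320")
    try rtf.write(to: rtfURL, atomically: true, encoding: .ascii)
}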
HTH

How to debug why Mac OS is not using Hardware H264 encoder

I'm trying to encode a video-only stream using H264, and I want to use the hardware encoder in order to compare both quality and resource consumption between hardware and CPU encoding. The problem is that I am not able to force the OS to use the hardware encoder.
This is the code I'm using to create the VTCompressionSession:
var status: OSStatus
let encoderSpecifications: CFDictionary? = [
    kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder as String: true,
    kVTVideoEncoderSpecification_RequireHardwareAcceleratedVideoEncoder as String: true,
    kVTVideoEncoderSpecification_EncoderID as String: "com.apple.videotoolbox.videoencoder.24rgb" // Tried without this parameter so the system can decide which encoder ID it should use, but it doesn't work anyway.
]
let pixelBufferOptions: CFDictionary? = [
    kCVPixelBufferWidthKey as String: Int(width),
    kCVPixelBufferHeightKey as String: Int(height),
    kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_24RGB) // Tried commenting this out in case there was a pixel format constraint, but it didn't change anything
];
status = VTCompressionSessionCreate(kCFAllocatorDefault, width, height, CMVideoCodecType(kCMVideoCodecType_H264), encoderSpecifications, pixelBufferOptions, nil, { (outputCallbackRefCon: UnsafeMutablePointer<Void>, sourceFrameRefCon: UnsafeMutablePointer<Void>, status: OSStatus, infoFlags: VTEncodeInfoFlags, sampleBuffer: CMSampleBuffer?) -> Void in
    ...
}, unsafeBitCast(self, UnsafeMutablePointer<Void>.self), &compressionSession)
I opened the Console, and this is the only relevant message I get when I try to create the session:
10/28/15 22:06:27.711 Dupla-Mac[87762]: <<<< VTVideoEncoderSelection >>>> VTSelectAndCreateVideoEncoderInstanceInternal: no video encoder found for 'avc1'
This is the status code I get when I use the EncoderID:
2015-10-28 22:17:13.480 Dupla-Mac[87895:5578917] Couldn't create compression session :( -12908
And this is the one I get when I don't use the EncoderID:
2015-10-28 22:18:16.695 Dupla-Mac[87996:5581914] Couldn't create compression session :( -12915
Both relate to the lack of availability of the resource, but I couldn't find any difference. I've checked that the best-known features that might use the hardware encoder are turned off, but I don't know how to verify this for sure. AirPlay is off, QuickTime is off, no app is accessing the camera, and so on.
TL;DR: is there any way to force the hardware encoder, or to find out what strategy the OS uses to enable it, and ultimately why it is not available at a given moment?
Thanks in advance!
I guess you've already resolved the problem, but for others: the only HW-accelerated encoder available on macOS (10.8-10.12, for all Macs from 2012 onwards) / iOS (8-10) is com.apple.videotoolbox.videoencoder.h264.gva. Here is the full list: https://gist.github.com/vade/06ace2e33561a79cc240
To get the encoder list:
CFArrayRef encoder_list;
VTCopyVideoEncoderList(NULL, &encoder_list);
CFIndex size = CFArrayGetCount(encoder_list);
for (CFIndex i = 0; i < size; i++) {
    CFDictionaryRef encoder_dict = CFArrayGetValueAtIndex(encoder_list, i);
    CFStringRef type = CFDictionaryGetValue(encoder_dict, kVTVideoEncoderList_CodecType);
    CFStringRef encoderID = CFDictionaryGetValue(encoder_dict, kVTVideoEncoderList_EncoderID);
    CFStringRef codecName = CFDictionaryGetValue(encoder_dict, kVTVideoEncoderList_CodecName);
    CFStringRef encoderName = CFDictionaryGetValue(encoder_dict, kVTVideoEncoderList_EncoderName);
    CFStringRef display_name = CFDictionaryGetValue(encoder_dict, kVTVideoEncoderList_DisplayName);
    NSLog(@"%@ %@ %@ %@ %@", type, encoderID, codecName, encoderName, display_name);
}
CFRelease(encoder_list); // VTCopyVideoEncoderList follows the Copy rule, so release the list
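Roughly the same thing from Swift, as an untested sketch that only prints the encoder ID and display name of each entry:
import VideoToolbox

var encoderListCF: CFArray?
if VTCopyVideoEncoderList(nil, &encoderListCF) == noErr, let list = encoderListCF {
    // Each entry is a dictionary describing one encoder (ID, display name, codec type, ...)
    let encoders = (list as NSArray) as? [[String: Any]] ?? []
    for encoder in encoders {
        let encoderID = encoder[kVTVideoEncoderList_EncoderID as String] ?? "?"
        let displayName = encoder[kVTVideoEncoderList_DisplayName as String] ?? "?"
        print("\(encoderID)\t\(displayName)")
    }
}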

How to convert PFObject array in image view?

Sorry guys, I want to try this again, just reworded so it's understandable.
imageFiles is an array of PFObjects, about 10 images:
let randomNumber = imageFiles[Int(arc4random_uniform(UInt32(imageFiles.count)))]
println(randomNumber)
and the println gives me a random image file from the array.
How do I put that into the image view? Meaning I want a random element of the array to be viewable in the UIImageView.
let image = UIImage(data: randomNumber as AnyObject as! NSData)
This is the closest I've got; no syntax error, only a runtime error:
Could not cast value of type 'PFObject' (0x105639070) to 'NSData' (0x106c97a48).
An actual image stored in Parse is not of type PFObject; it is of type PFFile. Normally a reference to a PFFile is stored within a PFObject.
I can imagine your code/pseudocode will end up looking like the following:
let myRandomImageMetaData = imageFiles[Int(arc4random_uniform(UInt32(imageFiles.count)))]
let imageFileReference = myRandomImageMetaData["imgLink"] as! PFFile
The actual code will depend on your database structure, which I've already asked you for here but you never provided.
Now that you have the file reference, you have to download the image data itself. You can do it like so:
imageFileReference.getDataInBackgroundWithBlock {
    (imageData: NSData?, error: NSError?) -> Void in
    if error == nil {
        if let imageData = imageData {
            let image = UIImage(data: imageData)
        }
    }
}
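To then actually put the image into the image view (the original question), something along these lines should work; myImageView is a placeholder for your own outlet, and the dispatch to the main queue is just a precaution in case the callback is not delivered on the main thread:
imageFileReference.getDataInBackgroundWithBlock {
    (imageData: NSData?, error: NSError?) -> Void in
    if error == nil {
        if let imageData = imageData {
            let image = UIImage(data: imageData)
            dispatch_async(dispatch_get_main_queue()) {
                // `myImageView` is a placeholder for your actual UIImageView outlet
                self.myImageView.image = image
            }
        }
    }
}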
You can read more about storing and retrieving images stored as PFFile in Parse here:
https://parse.com/docs/ios/guide#files-images
Now, your next problem is that you did not know any of this when uploading the images in the first place. So there is a high risk that you uploaded the images to Parse incorrectly and that the images don't exist there at all.

GLKView won't change pixel format

There is an issue with GLKView that I'm stuck on. First, I create an EAGLContext and make it current:
EAGLContext* pOpenGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
if(!pOpenGLContext)
return nil;
if(![EAGLContext setCurrentContext:pOpenGLContext])
return nil;
Runs OK (I need version 3, so it suits me)! Then I create a GLKView attached to the previously created context:
GLKView* pOpenGLView = [[GLKView alloc] initWithFrame:Frame context:pOpenGLContext];
It's OK. But this code doesn't change anything at all :(
[pOpenGLView setDrawableColorFormat:GLKViewDrawableColorFormatRGBA8888];
[pOpenGLView setDrawableDepthFormat:GLKViewDrawableDepthFormat24];
[pOpenGLView setDrawableStencilFormat:GLKViewDrawableStencilFormatNone];
[pOpenGLView setDrawableMultisample:GLKViewDrawableMultisampleNone];
Then I do some final stuff:
pOpenGLView.delegate = self;
[pMainWindow addSubview:pOpenGLView];
...
However, after setting GLKViewDrawableStencilFormatNone, when I ask OpenGL for the depth and stencil formats... I get:
glGetIntegerv(GL_DEPTH_BITS, &OpenGLDepthBits); // = 32 (I need 24)
glGetIntegerv(GL_STENCIL_BITS, &OpenGLStencilBits); // = 8 (I need 0)
I need to turn the stencil buffer off, and I need a 24-bit depth buffer.
I have also tried doing it like this:
pOpenGLView.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
pOpenGLView.drawableDepthFormat = GLKViewDrawableDepthFormat24;
pOpenGLView.drawableStencilFormat = GLKViewDrawableStencilFormatNone;
How can I achieve this? What is wrong here? Thank you.

Why does CMSampleBufferGetImageBuffer return NULL

I have built some code to process video files on OSX, frame by frame. The following is an extract from the code, which builds OK, opens the file, locates the video track (the only track) and starts reading CMSampleBuffers without problem. However, for each CMSampleBufferRef I obtain, CMSampleBufferGetImageBuffer returns NULL when I try to extract the pixel buffer for the frame. There's no indication in the documentation as to why I could expect a NULL return value or how I could fix the issue. It happens with all the videos on which I've tested it, regardless of capture source or codec.
Any help greatly appreciated.
NSString *assetInPath = @"/Users/Dave/Movies/movie.mp4";
NSURL *assetInUrl = [NSURL fileURLWithPath:assetInPath];
AVAsset *assetIn = [AVAsset assetWithURL:assetInUrl];
NSError *error;
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:assetIn error:&error];
AVAssetTrack *track = [assetIn.tracks objectAtIndex:0];
AVAssetReaderOutput *assetReaderOutput = [[AVAssetReaderTrackOutput alloc]
initWithTrack:track
outputSettings:nil];
[assetReader addOutput:assetReaderOutput];
// Start reading
[assetReader startReading];
CMSampleBufferRef sampleBuffer;
do {
sampleBuffer = [assetReaderOutput copyNextSampleBuffer];
/**
** At this point, sampleBuffer is non-null, has all appropriate attributes to indicate that
** it's a video frame, 320x240 or whatever and looks perfectly fine. But the next
** line always returns NULL without logging any obvious error message
**/
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
if( pixelBuffer != NULL ) {
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
...
other processing removed here for clarity
}
} while( ... );
To be clear, I've stripped all the error-checking code, but no problems were indicated in that code, i.e. the AVAssetReader is reading, the CMSampleBufferRef looks fine, etc.
You haven't specified any outputSettings when creating your AVAssetReaderTrackOutput. I've run into your issue when specifying nil in order to receive the video track's original pixel format when calling copyNextSampleBuffer. In my app I wanted to ensure no conversion was happening when calling copyNextSampleBuffer, for the sake of performance; if this isn't a big concern for you, specify a pixel format in the output settings.
The following are Apple's recommended pixel formats, based on the hardware capabilities:
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
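For example, in Swift (an untested sketch, where `track` stands for the video AVAssetTrack from the question), requesting decoded pixel buffers so that CMSampleBufferGetImageBuffer should then return a non-NULL CVImageBuffer:
import AVFoundation

let outputSettings: [String: Any] = [
    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
]
let assetReaderOutput = AVAssetReaderTrackOutput(track: track, outputSettings: outputSettings)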
Because you haven't supplied any outputSettings, you're forced to use the raw data contained within the frame.
You have to get the block buffer from the sample buffer using CMSampleBufferGetDataBuffer(sampleBuffer); after you have that, you need to get the actual location of the data in the block buffer using:
CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
size_t blockBufferLength;
char *blockBufferPointer;
CMBlockBufferGetDataPointer(blockBuffer, 0, NULL, &blockBufferLength, &blockBufferPointer);
Look at *blockBufferPointer and decode the bytes using the frame header information for your required codec.
FWIW: Here is what official docs say for the return value of CMSampleBufferGetImageBuffer:
"Result is a CVImageBuffer of media data. The result will be NULL if the CMSampleBuffer does not contain a CVImageBuffer, or if the CMSampleBuffer contains a CMBlockBuffer, or if there is some other error."
Also note that the caller does not own the returned dataBuffer from CMSampleBufferGetImageBuffer, and must retain it explicitly if the caller needs to maintain a reference to it.
Hopefully this info helps.
