Can VideoToolbox decode H264 Annex B natively? Error Code -8969 BadData - macos

My goal is to mirror the screen of an iDevice to OSX, as lag-free as possible.
To my knowledge there are two ways to do this:
Airplay Mirroring (e.g. Reflector)
CoreMediaIO via Lightning (e.g. Quicktime Recording)
I have chosen to pursue the second method, because (to my knowledge) connected iDevices can be recognized as DAL devices automatically after a one-time setup.
The main resource on how to do this is this blog: https://nadavrub.wordpress.com/2015/07/06/macos-media-capture-using-coremediaio/
That blog goes very deep into how to use CoreMediaIO; however, it seems you can work with AVFoundation once the connected iDevice has been recognized as an AVCaptureDevice.
The question "How to mirror iOS screen via USB?" has a posted solution for grabbing each frame of the H264 (Annex B) muxed data stream supplied by the iDevice.
However, my problem is that VideoToolbox will not decode correctly (error code -8969, BadData), even though there shouldn't be any difference in the code.
vtDecompressionDuctDecodeSingleFrame signalled err=-8969 (err) (VTVideoDecoderDecodeFrame returned error) at /SourceCache/CoreMedia_frameworks/CoreMedia-1562.240/Sources/VideoToolbox/VTDecompressionSession.c line 3241
Complete Code:
#import "ViewController.h"
@import CoreMediaIO;
@import AVFoundation;
@import AppKit;
@implementation ViewController
AVCaptureSession *session;
AVCaptureDeviceInput *newVideoDeviceInput;
AVCaptureVideoDataOutput *videoDataOutput;
- (void)viewDidLoad {
[super viewDidLoad];
}
- (instancetype)initWithCoder:(NSCoder *)coder
{
self = [super initWithCoder:coder];
if (self) {
// Allow iOS Devices Discovery
CMIOObjectPropertyAddress prop =
{ kCMIOHardwarePropertyAllowScreenCaptureDevices,
kCMIOObjectPropertyScopeGlobal,
kCMIOObjectPropertyElementMaster };
UInt32 allow = 1;
CMIOObjectSetPropertyData( kCMIOObjectSystemObject,
&prop, 0, NULL,
sizeof(allow), &allow );
// Get devices
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeMuxed];
BOOL deviceAttached = false;
for (int i = 0; i < [devices count]; i++) {
AVCaptureDevice *device = devices[i];
if ([[device uniqueID] isEqualToString:@"b48defcadf92f300baf5821923f7b3e2e9fb3947"]) {
deviceAttached = true;
[self startSession:device];
break;
}
}
}
return self;
}
- (void) deviceConnected:(AVCaptureDevice *)device {
if ([[device uniqueID] isEqualToString:@"b48defcadf92f300baf5821923f7b3e2e9fb3947"]) {
[self startSession:device];
}
}
- (void) startSession:(AVCaptureDevice *)device {
// Init capturing session
session = [[AVCaptureSession alloc] init];
// Start session configuration
[session beginConfiguration];
// Add session input
NSError *error;
newVideoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (newVideoDeviceInput == nil) {
dispatch_async(dispatch_get_main_queue(), ^(void) {
NSLog(@"%@", error);
});
} else {
[session addInput:newVideoDeviceInput];
}
// Add session output
videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
videoDataOutput.videoSettings = [NSDictionary dictionaryWithObject: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey: (id)kCVPixelBufferPixelFormatTypeKey];
dispatch_queue_t videoQueue = dispatch_queue_create("videoQueue", NULL);
[videoDataOutput setSampleBufferDelegate:self queue:videoQueue];
[session addOutput:videoDataOutput];
// Finish session configuration
[session commitConfiguration];
// Start the session
[session startRunning];
}
#pragma mark - AVCaptureVideoDataOutputSampleBufferDelegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
//NSImage *resultNSImage = [self imageFromSampleBuffer:sampleBuffer];
//self.imageView.image = [self nsImageFromSampleBuffer:sampleBuffer];
self.imageView.image = [[NSImage alloc] initWithData:imageToBuffer(sampleBuffer)];
}
NSData* imageToBuffer( CMSampleBufferRef source) {
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(source);
CVPixelBufferLockBaseAddress(imageBuffer,0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
void *src_buff = CVPixelBufferGetBaseAddress(imageBuffer);
NSData *data = [NSData dataWithBytes:src_buff length:bytesPerRow * height];
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
return data;
}
@end

No, you must remove the Annex B start codes and replace them with size values, the same format as MP4 (AVCC).
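For reference, here is a minimal sketch of that conversion (not part of the original answer; helper names are illustrative): every 3- or 4-byte Annex B start code is replaced with a 4-byte big-endian NAL unit length before the data is handed to VTDecompressionSession. Note that the SPS and PPS NAL units should not be fed to the decoder as sample data; they belong in the CMVideoFormatDescription, e.g. created with CMVideoFormatDescriptionCreateFromH264ParameterSets.
#import <Foundation/Foundation.h>
// Returns the offset of the next Annex B start code at or after `from`,
// or `len` if there is none; *scLen receives the start-code length (3 or 4).
static size_t NextStartCode(const uint8_t *buf, size_t len, size_t from, size_t *scLen) {
    for (size_t k = from; k + 3 <= len; k++) {
        if (buf[k] == 0x00 && buf[k + 1] == 0x00) {
            if (buf[k + 2] == 0x01) { *scLen = 3; return k; }
            if (k + 4 <= len && buf[k + 2] == 0x00 && buf[k + 3] == 0x01) { *scLen = 4; return k; }
        }
    }
    *scLen = 0;
    return len;
}
// Converts one Annex B access unit into AVCC layout: each NAL unit is
// prefixed with its length as a 4-byte big-endian integer instead of a
// start code, which is the layout VTDecompressionSession expects.
static NSData *AnnexBToAVCC(const uint8_t *buf, size_t len) {
    NSMutableData *avcc = [NSMutableData data];
    size_t scLen = 0;
    size_t pos = NextStartCode(buf, len, 0, &scLen);
    while (pos < len) {
        size_t nalStart = pos + scLen;
        size_t nextScLen = 0;
        size_t next = NextStartCode(buf, len, nalStart, &nextScLen);
        uint32_t nalLength = (uint32_t)(next - nalStart);
        uint32_t bigEndianLength = CFSwapInt32HostToBig(nalLength);
        [avcc appendBytes:&bigEndianLength length:4];
        [avcc appendBytes:buf + nalStart length:nalLength];
        pos = next;
        scLen = nextScLen;
    }
    return avcc;
}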

Related

AVFoundation image captured is dark

On OS X I use AVFoundation to capture an image from a USB camera. Everything works fine, but the image I get is darker compared to the live video.
Device capture configuration
-(BOOL)prepareCapture{
captureSession = [[AVCaptureSession alloc] init];
NSError *error;
imageOutput=[[AVCaptureStillImageOutput alloc] init];
NSNumber * pixelFormat = [NSNumber numberWithInt:k32BGRAPixelFormat];
[imageOutput setOutputSettings:[NSDictionary dictionaryWithObject:pixelFormat forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
videoOutput=[[AVCaptureMovieFileOutput alloc] init];
AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:MyVideoDevice error:&error];
if (videoInput) {
[captureSession beginConfiguration];
[captureSession addInput:videoInput];
[captureSession setSessionPreset:AVCaptureSessionPresetHigh];
//[captureSession setSessionPreset:AVCaptureSessionPresetPhoto];
[captureSession addOutput:imageOutput];
[captureSession addOutput:videoOutput];
[captureSession commitConfiguration];
}
else {
// Handle the failure.
return NO;
}
return YES;
}
Add view for live preview
-(void)settingPreview:(NSView*)View{
// Attach preview to session
previewView = View;
CALayer *previewViewLayer = [previewView layer];
[previewViewLayer setBackgroundColor:CGColorGetConstantColor(kCGColorBlack)];
AVCaptureVideoPreviewLayer *newPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];
[newPreviewLayer setFrame:[previewViewLayer bounds]];
[newPreviewLayer setAutoresizingMask:kCALayerWidthSizable | kCALayerHeightSizable];
[previewViewLayer addSublayer:newPreviewLayer];
//[self setPreviewLayer:newPreviewLayer];
[captureSession startRunning];
}
Code to capture the image
-(void)captureImage{
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in imageOutput.connections) {
for (AVCaptureInputPort *port in [connection inputPorts]) {
if ([[port mediaType] isEqual:AVMediaTypeVideo] ) {
videoConnection = connection;
break;
}
}
if (videoConnection) { break; }
}
[imageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:
^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
CFDictionaryRef exifAttachments =
CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
if (exifAttachments) {
// Do something with the attachments.
}
// Continue as appropriate.
//IMG is a global NSImage
IMG = [self imageFromSampleBuffer:imageSampleBuffer];
[[self delegate] imageReady:IMG];
}];
}
Create an NSImage from the sample buffer data; I think the problem is here:
- (NSImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the number of bytes per row for the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
//UIImage *image = [UIImage imageWithCGImage:quartzImage];
NSImage * image = [[NSImage alloc] initWithCGImage:quartzImage size:NSZeroSize];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
Solution found
The problem was in imageFromSampleBuffer.
I used this code instead and the picture is perfect:
// Continue as appropriate.
//IMG = [self imageFromSampleBuffer:imageSampleBuffer];
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
if (imageBuffer) {
CVBufferRetain(imageBuffer);
NSCIImageRep* imageRep = [NSCIImageRep imageRepWithCIImage: [CIImage imageWithCVImageBuffer: imageBuffer]];
IMG = [[NSImage alloc] initWithSize: [imageRep size]];
[IMG addRepresentation: imageRep];
CVBufferRelease(imageBuffer);
}
Code found in this answer
In my case, I still needed to call captureStillImageAsynchronouslyFromConnection: multiple times to force the built-in camera to expose properly:
int primeCount = 8; //YMMV
for (int i = 0; i < primeCount; i++) {
[imageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {}];
}
[imageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
if (imageBuffer) {
CVBufferRetain(imageBuffer);
NSCIImageRep* imageRep = [NSCIImageRep imageRepWithCIImage: [CIImage imageWithCVImageBuffer: imageBuffer]];
IMG = [[NSImage alloc] initWithSize: [imageRep size]];
[IMG addRepresentation: imageRep];
}
}];

Metal Framework on macOS

I am creating a simple texture display that essentially renders video frames in BGRA format through Metal. I follow the same steps as in the Metal WWDC session, but I have problems creating the render encoder. My code is:
id <MTLDevice> device = MTLCreateSystemDefaultDevice();
id<MTLCommandQueue> commandQueue = [device newCommandQueue];
id<MTLLibrary> library = [device newDefaultLibrary];
// Create Render Command Descriptor.
MTLRenderPipelineDescriptor* renderPipelineDesc = [MTLRenderPipelineDescriptor new];
renderPipelineDesc.colorAttachments[0].pixelFormat = MTLPixelFormatBGRA8Unorm;
renderPipelineDesc.vertexFunction = [library newFunctionWithName:@"basic_vertex"];
renderPipelineDesc.fragmentFunction = [library newFunctionWithName:@"basic_fragment"];
NSError* error = nil;
id<MTLRenderPipelineState> renderPipelineState = [device newRenderPipelineStateWithDescriptor:renderPipelineDesc
error:&error];
id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
MTLRenderPassDescriptor* renderPassDesc = [MTLRenderPassDescriptor renderPassDescriptor];
id<CAMetalDrawable> drawable = [_metalLayer nextDrawable];
MTLRenderPassColorAttachmentDescriptor* colorAttachmentDesc = [MTLRenderPassColorAttachmentDescriptor new];
colorAttachmentDesc.texture = drawable.texture;
colorAttachmentDesc.loadAction = MTLLoadActionLoad;
colorAttachmentDesc.storeAction = MTLStoreActionStore;
colorAttachmentDesc.clearColor = MTLClearColorMake(0, 0, 0, 1);
[renderPassDesc.colorAttachments setObject:colorAttachmentDesc atIndexedSubscript:0];
[inTexture replaceRegion:region
mipmapLevel:0
withBytes:imageBytes
bytesPerRow:CVPixelBufferGetBytesPerRow(_image)];
id<MTLRenderCommandEncoder> renderCmdEncoder = [commandBuffer renderCommandEncoderWithDescriptor:renderPassDesc];
[renderCmdEncoder setRenderPipelineState:_renderPipelineState];
[renderCmdEncoder endEncoding];
This code crashes on the line below, saying "No Render Targets Found":
id renderCmdEncoder = [commandBuffer renderCommandEncoderWithDescriptor:renderPassDesc];
I am not able to figure out where and how to set the render target.
This will work perfectly; if you need help implementing it, let me know:
@import UIKit;
@import AVFoundation;
@import CoreMedia;
#import <MetalKit/MetalKit.h>
#import <Metal/Metal.h>
#import <MetalPerformanceShaders/MetalPerformanceShaders.h>
@interface ViewController : UIViewController <MTKViewDelegate, AVCaptureVideoDataOutputSampleBufferDelegate> {
NSString *_displayName;
NSString *serviceType;
}
@property (retain, nonatomic) SessionContainer *session;
@property (retain, nonatomic) AVCaptureSession *avSession;
@end
#import "ViewController.h"
@interface ViewController () {
MTKView *_metalView;
id<MTLDevice> _device;
id<MTLCommandQueue> _commandQueue;
id<MTLTexture> _texture;
CVMetalTextureCacheRef _textureCache;
}
@property (strong, nonatomic) AVCaptureDevice *videoDevice;
@property (nonatomic) dispatch_queue_t sessionQueue;
@end
@implementation ViewController
- (void)viewDidLoad {
NSLog(#"%s", __PRETTY_FUNCTION__);
[super viewDidLoad];
_device = MTLCreateSystemDefaultDevice();
_metalView = [[MTKView alloc] initWithFrame:self.view.bounds];
[_metalView setContentMode:UIViewContentModeScaleAspectFit];
_metalView.device = _device;
_metalView.delegate = self;
_metalView.clearColor = MTLClearColorMake(1, 1, 1, 1);
_metalView.colorPixelFormat = MTLPixelFormatBGRA8Unorm;
_metalView.framebufferOnly = NO;
_metalView.autoResizeDrawable = NO;
CVMetalTextureCacheCreate(NULL, NULL, _device, NULL, &_textureCache);
[self.view addSubview:_metalView];
self.sessionQueue = dispatch_queue_create( "session queue", DISPATCH_QUEUE_SERIAL );
if ([self setupCamera]) {
[_avSession startRunning];
}
}
- (BOOL)setupCamera {
NSLog(#"%s", __PRETTY_FUNCTION__);
#try {
NSError * error;
_avSession = [[AVCaptureSession alloc] init];
[_avSession beginConfiguration];
[_avSession setSessionPreset:AVCaptureSessionPreset640x480];
// get list of devices; connect to front-facing camera
self.videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if (self.videoDevice == nil) return FALSE;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:self.videoDevice error:&error];
[_avSession addInput:input];
dispatch_queue_t sampleBufferQueue = dispatch_queue_create("CameraMulticaster", DISPATCH_QUEUE_SERIAL);
AVCaptureVideoDataOutput * dataOutput = [[AVCaptureVideoDataOutput alloc] init];
[dataOutput setAlwaysDiscardsLateVideoFrames:YES];
[dataOutput setVideoSettings:@{(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)}];
[dataOutput setSampleBufferDelegate:self queue:sampleBufferQueue];
[_avSession addOutput:dataOutput];
[_avSession commitConfiguration];
} @catch (NSException *exception) {
NSLog(@"%s - %@", __PRETTY_FUNCTION__, exception.description);
return FALSE;
} @finally {
return TRUE;
}
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
{
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
CVMetalTextureRef texture = NULL;
CVReturn status = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _textureCache, pixelBuffer, NULL, MTLPixelFormatBGRA8Unorm, width, height, 0, &texture);
if(status == kCVReturnSuccess)
{
_metalView.drawableSize = CGSizeMake(width, height);
_texture = CVMetalTextureGetTexture(texture);
_commandQueue = [_device newCommandQueue];
CFRelease(texture);
}
}
}
- (void)drawInMTKView:(MTKView *)view {
// creating command encoder
if (_texture) {
id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];
id<MTLTexture> drawingTexture = view.currentDrawable.texture;
// set up and encode the filter
MPSImageGaussianBlur *filter = [[MPSImageGaussianBlur alloc] initWithDevice:_device sigma:5];
[filter encodeToCommandBuffer:commandBuffer sourceTexture:_texture destinationTexture:drawingTexture];
// committing the drawing
[commandBuffer presentDrawable:view.currentDrawable];
[commandBuffer commit];
_texture = nil;
}
}
- (void)mtkView:(MTKView *)view drawableSizeWillChange:(CGSize)size {
}
@end
You should try one of the following approaches:
1. Instead of creating a new render pass descriptor, use the current render pass descriptor object from the MTKView. This render pass descriptor will already be configured; you need not set anything. Try the sample code given below:
if let currentPassDesc = view.currentRenderPassDescriptor,
let currentDrawable = view.currentDrawable
{
let renderCommandEncoder =
commandBuffer.makeRenderCommandEncoder(descriptor: currentPassDesc)
renderCommandEncoder.setRenderPipelineState(renderPipeline)
//set vertex buffers and call draw apis
.......
.......
commandBuffer.present(currentDrawable)
}
2. You are creating a new render pass descriptor and then setting its color attachment to the drawable's texture. Instead, create a new texture object and set its usage to render target. The content will then be rendered into your new texture, but it will not be displayed on screen; to display it, you have to copy your texture's contents into the drawable's texture and then present the drawable (see the blit sketch after the code below).
Below is the code for making the render target:
renderPassDescriptor.colorAttachments[0].clearColor =
MTLClearColor(red:
0.0,green: 0.0,blue: 0.0,alpha: 1.0)
renderPassDescriptor.colorAttachments[0].loadAction = .clear
renderPassDescriptor.colorAttachments[0].storeAction = .store
renderPassDescriptor.depthAttachment.clearDepth = 1.0
renderPassDescriptor.depthAttachment.loadAction = .clear
renderPassDescriptor.depthAttachment.storeAction = .dontCare
let view = self.view as! MTKView
let textDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                        width: Int(view.frame.width),
                                                        height: Int(view.frame.height),
                                                        mipmapped: false)
textDesc.depth = 1
//see below line
textDesc.usage =
[MTLTextureUsage.renderTarget,MTLTextureUsage.shaderRead]
textDesc.storageMode = .private
mainPassFrameBuffer = device.makeTexture(descriptor: textDesc)
renderPassDescriptor.colorAttachments[0].texture = mainPassFrameBuffer
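For the copy step described in point 2, a blit encoder can be used. The sketch below is not part of the original answer; it is written in Objective-C to match the question's code, and mainPassFrameBuffer, commandBuffer and drawable stand for the offscreen render target, the current command buffer and the current drawable.
// Sketch only: copy the offscreen render target into the drawable's texture,
// then present. Assumes both textures have the same size and pixel format.
id<MTLBlitCommandEncoder> blitEncoder = [commandBuffer blitCommandEncoder];
[blitEncoder copyFromTexture:mainPassFrameBuffer
                 sourceSlice:0
                 sourceLevel:0
                sourceOrigin:MTLOriginMake(0, 0, 0)
                  sourceSize:MTLSizeMake(mainPassFrameBuffer.width, mainPassFrameBuffer.height, 1)
                   toTexture:drawable.texture
            destinationSlice:0
            destinationLevel:0
           destinationOrigin:MTLOriginMake(0, 0, 0)];
[blitEncoder endEncoding];
[commandBuffer presentDrawable:drawable];
[commandBuffer commit];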

SampleBufferDelegate is not Working

For some odd reason AVCaptureVideoDataOutputSampleBufferDelegate isn't triggering. I've added the delegate and everything, but I'm not sure why it isn't being run in my code. Can anybody help me figure out why?
Delegates in my .h
@class AVPlayer;
@class AVPlayerClass;
@interface Camera : UIViewController <UIImagePickerControllerDelegate, UINavigationControllerDelegate, AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureFileOutputRecordingDelegate> {
.m code (initializeCamera is called in viewDidLoad)
-(void)initializeCamera {
Session = [[AVCaptureSession alloc]init];
[Session setSessionPreset:AVCaptureSessionPresetPhoto];
AVCaptureDevice *audioCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
NSError *error = nil;
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioCaptureDevice error:&error];
[Session addInput:audioInput];
// Preview Layer***************
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc]initWithSession:Session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
CALayer *rootLayer = [[self view] layer];
[rootLayer setMasksToBounds:YES];
CGRect frame = self.CameraView.frame;
[previewLayer setFrame:frame];
[rootLayer insertSublayer:previewLayer atIndex:0];
[Session beginConfiguration];
//Remove existing input
[Session removeInput:newVideoInput];
newCamera = [self cameraWithPosition:AVCaptureDevicePositionBack];
// FrontCamera = NO;
[Session setSessionPreset:AVCaptureSessionPresetHigh];
if ([Session canSetSessionPreset:AVCaptureSessionPreset1920x1080])
//Check size based configs are supported before setting them
[Session setSessionPreset:AVCaptureSessionPreset1920x1080];
//Add input to session
NSError *err = nil;
newVideoInput = [[AVCaptureDeviceInput alloc] initWithDevice:newCamera error:&err];
if(!newVideoInput || err)
{
NSLog(#"Error creating capture device input: %#", err.localizedDescription);
}
else if ([Session canAddInput:newVideoInput])
{
[Session addInput:newVideoInput];
}
[Session commitConfiguration];
stillImageOutput = [[AVCaptureStillImageOutput alloc]init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
[Session addOutput:stillImageOutput];
MovieFileOutput = [[AVCaptureMovieFileOutput alloc]init];
Float64 TotalSeconds = 10;
int32_t preferredTimeScale = 60;
CMTime maxDuration = CMTimeMakeWithSeconds(TotalSeconds, preferredTimeScale);
MovieFileOutput.maxRecordedDuration = maxDuration;
MovieFileOutput.minFreeDiskSpaceLimit = 1024 * 1024;
if ([Session canAddOutput:MovieFileOutput])
[Session addOutput:MovieFileOutput];
// Create a VideoDataOutput and add it to the session
// AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
//
// [Session addOutput:output];
//
// // Configure your output.
//
// dispatch_queue_t queue = dispatch_get_main_queue();
//
// [output setSampleBufferDelegate:self queue:queue];
//
// // dispatch_release(queue);
//
// // Specify the pixel format
//
// output.videoSettings = [NSDictionary dictionaryWithObject:
//
// [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
//
// forKey:(id)kCVPixelBufferPixelFormatTypeKey];
//
//
//
//
//
// AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
//
// [dataOutput setAlwaysDiscardsLateVideoFrames:YES];
// [dataOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
// forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
// [dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
//
// if ([Session canAddOutput:dataOutput])
// [Session addOutput:dataOutput];
// Add to the session
// [self setupVideoOutput];
[Session setSessionPreset:AVCaptureSessionPresetHigh];
if ([Session canSetSessionPreset:AVCaptureSessionPreset1920x1080])
//Check size based configs are supported before setting them
[Session setSessionPreset:AVCaptureSessionPreset1920x1080];
[Session startRunning];
}
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef )sampleBuffer fromConnection:(AVCaptureConnection *)connections {
NSLog(#"Buff");
pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
VideoBuffer = pixelBuffer;
}
-(void)captureOutput:(AVCaptureOutput *)captureOutput didDropSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
NSLog(#"The drop");
}
My code isn't triggering AVCaptureVideoDataOutputSampleBufferDelegate because I am using AVCaptureMovieFileOutput instead of AVCaptureVideoDataOutput. AVCaptureMovieFileOutput apparently does not use sample buffers. As soon as I know how to set up AVCaptureVideoDataOutput correctly to use sample buffers I will post my code. Hope this helps somebody.
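For what it's worth, a minimal AVCaptureVideoDataOutput setup could look like the sketch below (not taken from the answer; it reuses the question's Session ivar and its view controller as the delegate). Note that an AVCaptureMovieFileOutput and an AVCaptureVideoDataOutput are generally reported not to deliver data from the same session at the same time on iOS, so the movie file output would have to be removed.
// Sketch: add a video data output so captureOutput:didOutputSampleBuffer:
// fromConnection: gets called. Avoid keeping an AVCaptureMovieFileOutput on
// the same session, or the sample buffer delegate may never fire.
AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
dataOutput.alwaysDiscardsLateVideoFrames = YES;
dataOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
// Deliver sample buffers on a dedicated serial queue.
dispatch_queue_t bufferQueue = dispatch_queue_create("sample.buffer.queue", DISPATCH_QUEUE_SERIAL);
[dataOutput setSampleBufferDelegate:self queue:bufferQueue];
if ([Session canAddOutput:dataOutput]) {
    [Session addOutput:dataOutput];
}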

Capturing blank stills from an AVCaptureScreenInput?

I'm working on sampling the screen using AVCaptureScreenInput and outputting it using an AVCaptureVideoDataOutput, and it's not working. The images it does output are blank, but it appears I'm doing everything right according to all the documentation I've read.
I've made sure the AVCaptureVideoDataOutput outputs something that can be read by CGImage (kCVPixelFormatType_32BGRA). When I run this same code and have it output to an AVCaptureMovieFileOutput, the movie renders fine and everything looks good, but what I really want is a series of images.
#import "ScreenRecorder.h"
#import <QuartzCore/QuartzCore.h>
@interface ScreenRecorder() <AVCaptureFileOutputRecordingDelegate, AVCaptureVideoDataOutputSampleBufferDelegate> {
BOOL _isRecording;
@private
AVCaptureSession *_session;
AVCaptureOutput *_movieFileOutput;
AVCaptureStillImageOutput *_imageFileOutput;
NSUInteger _frameIndex;
NSTimer *_timer;
NSString *_outputDirectory;
}
@end
@implementation ScreenRecorder
- (BOOL)recordDisplayImages:(CGDirectDisplayID)displayId toURL:(NSURL *)fileURL windowBounds:(CGRect)windowBounds duration:(NSTimeInterval)duration {
if (_isRecording) {
return NO;
}
_frameIndex = 0;
// Create a capture session
_session = [[AVCaptureSession alloc] init];
// Set the session preset as you wish
_session.sessionPreset = AVCaptureSessionPresetHigh;
// Create a ScreenInput with the display and add it to the session
AVCaptureScreenInput *input = [[[AVCaptureScreenInput alloc] initWithDisplayID:displayId] autorelease];
if (!input) {
[_session release];
_session = nil;
return NO;
}
if ([_session canAddInput:input]) {
[_session addInput:input];
}
input.cropRect = windowBounds;
// Create a MovieFileOutput and add it to the session
_movieFileOutput = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
[((AVCaptureVideoDataOutput *)_movieFileOutput) setVideoSettings:[NSDictionary dictionaryWithObjectsAndKeys:@(kCVPixelFormatType_32BGRA), kCVPixelBufferPixelFormatTypeKey, nil]];
// ((AVCaptureVideoDataOutput *)_movieFileOutput).alwaysDiscardsLateVideoFrames = YES;
if ([_session canAddOutput:_movieFileOutput])
[_session addOutput:_movieFileOutput];
// Start running the session
[_session startRunning];
// Delete any existing movie file first
if ([[NSFileManager defaultManager] fileExistsAtPath:[fileURL path]])
{
NSError *err;
if (![[NSFileManager defaultManager] removeItemAtPath:[fileURL path] error:&err])
{
NSLog(#"Error deleting existing movie %#",[err localizedDescription]);
}
}
_outputDirectory = [[fileURL path] retain];
[[NSFileManager defaultManager] createDirectoryAtPath:_outputDirectory withIntermediateDirectories:YES attributes:nil error:nil];
// Set the recording delegate to self
dispatch_queue_t queue = dispatch_queue_create("com.schaefer.lolz", 0);
[(AVCaptureVideoDataOutput *)_movieFileOutput setSampleBufferDelegate:self queue:queue];
//dispatch_release(queue);
if (0 != duration) {
_timer = [[NSTimer scheduledTimerWithTimeInterval:duration target:self selector:@selector(finishRecord:) userInfo:nil repeats:NO] retain];
}
_isRecording = YES;
return _isRecording;
}
- (void)dealloc
{
if (nil != _session) {
[_session stopRunning];
[_session release];
}
[_outputDirectory release];
_outputDirectory = nil;
[super dealloc];
}
- (void)stopRecording {
if (!_isRecording) {
return;
}
_isRecording = NO;
// Stop recording to the destination movie file
if ([_movieFileOutput isKindOfClass:[AVCaptureFileOutput class]]) {
[_movieFileOutput performSelector:@selector(stopRecording)];
}
[_session stopRunning];
[_session release];
_session = nil;
[_timer release];
_timer = nil;
}
-(void)finishRecord:(NSTimer *)timer
{
[self stopRecording];
}
//AVCaptureVideoDataOutputSampleBufferDelegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0); // Lock the image buffer
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); // Get information of the image
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef image = CGBitmapContextCreateImage(newContext);
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
_frameIndex++;
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
dispatch_async(dispatch_get_main_queue(), ^{
NSURL *URL = [NSURL fileURLWithPath:[_outputDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"%d.jpg", (int)_frameIndex]]];
CGImageDestinationRef destination = CGImageDestinationCreateWithURL((CFURLRef)URL, kUTTypeJPEG, 1, NULL);
CGImageDestinationAddImage(destination, image, nil);
if (!CGImageDestinationFinalize(destination)) {
NSLog(#"Failed to write image to %#", URL);
}
CFRelease(destination);
CFRelease(image);
});
}
#end
Your data isn't planar, so there is no base address for plane 0--there's no plane 0. (To be sure, you can check with CVPixelBufferIsPlanar.) You'll need CVPixelBufferGetBaseAddress to get a pointer to the first pixel. All the data will be interleaved.
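As a sketch of that fix inside the question's captureOutput:didOutputSampleBuffer:fromConnection: method (not from the original answer; the rest of the method is assumed to stay as posted):
// Interleaved 32BGRA buffers are non-planar, so plane 0 has no base address.
// Check for the planar case, otherwise take the buffer's own base address.
uint8_t *baseAddress;
if (CVPixelBufferIsPlanar(imageBuffer)) {
    baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
} else {
    baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
}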

Issue in writing NSOutputStream

I have one basic question: while working with NSOutputStream, should we wait for NSStreamEventHasSpaceAvailable to send a packet, or can we call [NSOutputStream write] as and when it's needed? I believe NSStream should take care of the write function.
If this is not correct, then please share your views on the following logic.
===== To Write on NSOutputStream =================
A queue holds the packets to be sent:
// StreamQueue.h
@interface StreamQueue : NSObject <NSCoding>
{
NSMutableArray * data;
NSRecursiveLock * theLock;
}
#pragma mark Initialization & Deallocation
- (id)init;
- (id)initWithQueue:(StreamQueue *)queue;
- (id)initWithCoder:(NSCoder *)coder;
- (void)dealloc;
- (void)encodeWithCoder:(NSCoder *)coder;
#pragma mark
#pragma mark Accessor Methods
- (int)size;
- (BOOL)isEmpty;
- (id)top;
- (NSArray *)data;
#pragma mark
#pragma mark Modifier Methods
- (void)enqueue:(id)object;
- (id)dequeue;
- (void)removeAll;
@end
and its implementation
#import "StreamQueue.h"
@implementation StreamQueue
#pragma mark Initialization & Deallocation
- (id)init
{
if (self = [super init]) {
data = [[NSMutableArray alloc] init];
theLock = [[NSRecursiveLock alloc] init];
}
return self;
}
- (id)initWithQueue:(StreamQueue *)queue
{
if (self = [super init]) {
data = [[NSMutableArray alloc] initWithArray:[queue data]];
theLock = [[NSRecursiveLock alloc] init];
}
return self;
}
- (id)initWithCoder:(NSCoder *)coder
{
if (self = [super init]) {
data = [[NSMutableArray alloc] initWithArray:[coder decodeObject]];
theLock = [[NSRecursiveLock alloc] init];
}
return self;
}
- (void)dealloc
{
[data release];
[theLock release];
[super dealloc];
}
- (void)encodeWithCoder:(NSCoder *)coder;
{
[coder encodeObject:data];
}
#pragma mark
#pragma mark Accessor Methods
- (int)size
{
int size;
[theLock lock];
size = [data count];
[theLock unlock];
return size;
}
- (BOOL)isEmpty
{
BOOL empty;
[theLock lock];
empty = ([data count] == 0);
[theLock unlock];
return empty;
}
- (id)top
{
id object = nil;
[theLock lock];
if (![self isEmpty])
object = [data objectAtIndex:0];
[theLock unlock];
return object;
}
- (NSArray *)data
{
NSArray * array;
[theLock lock];
array = [NSArray arrayWithArray:data];
[theLock unlock];
return array;
}
#pragma mark
#pragma mark Modifier Methods
- (void)enqueue:(id)object
{
[theLock lock];
[data addObject:object];
[theLock unlock];
}
- (id)dequeue
{
id object = [self top];
if (object != nil) {
[theLock lock];
[object retain];
[data removeObjectAtIndex:0];
[theLock unlock];
}
return [object autorelease];
}
- (void)removeAll
{
[theLock lock];
while (![self isEmpty])
[data removeObjectAtIndex:0];
[theLock unlock];
}
@end
Now, when the application has something to send over the socket (NSStream), it adds the data to the queue:
-(bool)sendRawData:(const uint8_t *)data length:(int)len{
// if still negotiating then don't send data
assert(!networkConnected);
NSData *pData = [NSData dataWithBytes:(const void *)data length:len];
// pToSendPacket is of type StreamQueue
[pToSendPacket enqueue:pData];
return true;
}
And this piece of code runs when we get the NSStreamEventHasSpaceAvailable event:
-(void)gotSpaceAvailable{
// is there any pending packets that to be send.
NSData *pData = (NSData *)[pToSendPacket dequeue];
if(pData == nil){
// no pending packets..
return;
}
const uint8_t *data = (const uint8_t *)[pData bytes];
int len = [pData length];
int sendlength = [pOutputStream write:data maxLength:len];
if(sendlength == -1 ){
NSError *theError = [pOutputStream streamError];
NSString *pString = [theError localizedDescription];
int errorCode = [theError code];
return ;
}
}
I was expecting the application to keep receiving the event whenever the output stream sends data, but I received it only once.
Please help.
If you don't wait for the event, the write call will block until space is available. Generally you want to design your code to work asynchronously, so waiting for NSStreamEventHasSpaceAvailable is the best solution.
As for when you receive the space available notification, see the documentation here:
If the delegate receives an NSStreamEventHasSpaceAvailable event and does not write anything to the stream, it does not receive further space-available events from the run loop until the NSOutputStream object receives more bytes. When this happens, the run loop is restarted for space-available events. If this scenario is likely in your implementation, you can have the delegate set a flag when it doesn't write to the stream upon receiving an NSStreamEventHasSpaceAvailable event. Later, when your program has more bytes to write, it can check this flag and, if set, write to the output-stream instance directly.
There is no firm guideline on how many bytes to write at one time. Although it may be possible to write all the data to the stream in one event, this depends on external factors, such as the behavior of the kernel and device and socket characteristics. The best approach is to use some reasonable buffer size, such as 512 bytes, one kilobyte (as in the example above), or a page size (four kilobytes).
So you should be getting regular NSStreamEventHasSpaceAvailable events as long as you do write data for each event.
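A minimal sketch of the flag pattern the documentation describes, reusing the question's pToSendPacket and pOutputStream (the spaceAvailable flag and the method names are illustrative, not from the question):
// Illustrative only: set a flag when space is reported but nothing is queued,
// and write directly on the next enqueue instead of waiting for an event.
- (void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode {
    if (eventCode == NSStreamEventHasSpaceAvailable) {
        NSData *pData = (NSData *)[pToSendPacket dequeue];
        if (pData == nil) {
            // Nothing queued: remember that the stream is writable.
            spaceAvailable = YES;
            return;
        }
        spaceAvailable = NO;
        [pOutputStream write:(const uint8_t *)[pData bytes] maxLength:[pData length]];
    }
}
- (void)queueAndSendData:(NSData *)pData {
    [pToSendPacket enqueue:pData];
    if (spaceAvailable) {
        // The stream already reported space; no further space-available event
        // will arrive until we write, so write the queued packet now.
        spaceAvailable = NO;
        NSData *next = (NSData *)[pToSendPacket dequeue];
        [pOutputStream write:(const uint8_t *)[next bytes] maxLength:[next length]];
    }
}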
