glResolveMultisampleFramebufferAPPLE() generates GL_INVALID_OPERATION on iOS (OpenGL ES 2.0) - opengl-es

I want to use glReadPixels() to take a screenshot of my scene. It works great if I don't use multisampling, but if I do, I get GL_INVALID_OPERATION from glResolveMultisampleFramebufferAPPLE(). Is there a way to resolve this problem?
My save function:
var wid = GLint()
var hei = GLint()
glGetRenderbufferParameteriv(GLenum(GL_RENDERBUFFER), GLenum(GL_RENDERBUFFER_WIDTH), &wid)
glGetRenderbufferParameteriv(GLenum(GL_RENDERBUFFER), GLenum(GL_RENDERBUFFER_HEIGHT), &hei)
let byteLength = Int(hei * wid) * 4
let bytes = UnsafeMutablePointer<GLubyte>.alloc(byteLength)
// init non-multisampled frame buffer
var framebuffer: GLuint = 0
var colorRenderbuffer: GLuint = 0
glGenFramebuffersOES(1, &framebuffer)
glBindFramebufferOES(GLenum(GL_FRAMEBUFFER_OES), framebuffer)
glGenRenderbuffersOES(1, &colorRenderbuffer)
glBindRenderbufferOES(GLenum(GL_RENDERBUFFER_OES), colorRenderbuffer)
glRenderbufferStorageOES(GLenum(GL_RENDERBUFFER_OES), GLenum(GL_RGBA8_OES), wid, hei)
glFramebufferRenderbufferOES(GLenum(GL_FRAMEBUFFER_OES), GLenum(GL_COLOR_ATTACHMENT0_OES), GLenum(GL_RENDERBUFFER_OES), colorRenderbuffer)
glBindFramebufferOES(GLenum(GL_DRAW_FRAMEBUFFER_APPLE), framebuffer)
var defaultFBO: GLint = 0
glGetIntegerv(GLenum(GL_FRAMEBUFFER_BINDING_OES), &defaultFBO)
glBindFramebufferOES(GLenum(GL_READ_FRAMEBUFFER_APPLE), GLuint(defaultFBO))
myglGetError() // OK
glResolveMultisampleFramebufferAPPLE()
myglGetError() // GL_INVALID_OPERATION
glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
glReadPixels(0, 0, GLsizei(wid), GLsizei(hei), GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), bytes)
glBindFramebuffer(GLenum(GL_FRAMEBUFFER), GLuint(defaultFBO))
glDeleteFramebuffers(1, &framebuffer)
I use the default framebuffer initialized by GLKit, with glkView.drawableMultisample = GLKViewDrawableMultisample.Multisample4X

I have tried your sample, and it seems that after some modifications it works.
Modified code:
var wid = GLint()
var hei = GLint()
glGetRenderbufferParameteriv(GLenum(GL_RENDERBUFFER), GLenum(GL_RENDERBUFFER_WIDTH), &wid)
glGetRenderbufferParameteriv(GLenum(GL_RENDERBUFFER), GLenum(GL_RENDERBUFFER_HEIGHT), &hei)
var def: GLint = 0
glGetIntegerv(GLenum(GL_FRAMEBUFFER_BINDING_OES), &def)
// init non-multisampled frame buffer
var framebuffer: GLuint = 0
var colorRenderbuffer: GLuint = 0
glGenFramebuffersOES(1, &framebuffer)
glBindFramebufferOES(GLenum(GL_FRAMEBUFFER_OES), framebuffer)
glGenRenderbuffersOES(1, &colorRenderbuffer)
glBindRenderbufferOES(GLenum(GL_RENDERBUFFER_OES), colorRenderbuffer)
glRenderbufferStorageOES(GLenum(GL_RENDERBUFFER_OES), GLenum(GL_RGBA8_OES), wid, hei)
glFramebufferRenderbufferOES(GLenum(GL_FRAMEBUFFER_OES), GLenum(GL_COLOR_ATTACHMENT0_OES), GLenum(GL_RENDERBUFFER_OES), colorRenderbuffer)
glBindFramebufferOES(GLenum(GL_DRAW_FRAMEBUFFER_APPLE), framebuffer)
// commented out:
// by this point GL_FRAMEBUFFER_BINDING_OES has already been overridden by the earlier
// glBindFramebufferOES(...) calls that bind 'framebuffer', so it no longer returns the default framebuffer
//var def: GLint = 0
//glGetIntegerv(GLenum(GL_FRAMEBUFFER_BINDING_OES), &def)
glBindFramebufferOES(GLenum(GL_READ_FRAMEBUFFER_APPLE), GLuint(def));
var err = glGetError()
print(String(format: "Error %X", err))
glResolveMultisampleFramebufferAPPLE()
err = glGetError()
print(String(format: "Error %X", err)) // GL_INVALID_OPERATION
glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
Also, here is a quote from the APPLE_framebuffer_multisample extension specification which, as far as I understand, explains why the modified code works:
Calling BindFramebuffer with <target> set to FRAMEBUFFER binds the framebuffer to both DRAW_FRAMEBUFFER_APPLE and READ_FRAMEBUFFER_APPLE.
APPLE_framebuffer_multisample
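For completeness, a minimal sketch of the working order of operations, reusing wid, hei, and bytes from the snippets above (the resolveFramebuffer/resolveColorRenderbuffer names are just illustrative). The key point is that the default (multisampled) framebuffer binding is queried before any other framebuffer is bound:
// query the (multisampled) default framebuffer binding FIRST, while it is still bound
var defaultFBO: GLint = 0
glGetIntegerv(GLenum(GL_FRAMEBUFFER_BINDING_OES), &defaultFBO)
// create a single-sampled framebuffer to resolve into
var resolveFramebuffer: GLuint = 0
var resolveColorRenderbuffer: GLuint = 0
glGenFramebuffersOES(1, &resolveFramebuffer)
glBindFramebufferOES(GLenum(GL_FRAMEBUFFER_OES), resolveFramebuffer)
glGenRenderbuffersOES(1, &resolveColorRenderbuffer)
glBindRenderbufferOES(GLenum(GL_RENDERBUFFER_OES), resolveColorRenderbuffer)
glRenderbufferStorageOES(GLenum(GL_RENDERBUFFER_OES), GLenum(GL_RGBA8_OES), wid, hei)
glFramebufferRenderbufferOES(GLenum(GL_FRAMEBUFFER_OES), GLenum(GL_COLOR_ATTACHMENT0_OES), GLenum(GL_RENDERBUFFER_OES), resolveColorRenderbuffer)
// read from the multisampled framebuffer, draw into the single-sampled one, then resolve
glBindFramebufferOES(GLenum(GL_READ_FRAMEBUFFER_APPLE), GLuint(defaultFBO))
glBindFramebufferOES(GLenum(GL_DRAW_FRAMEBUFFER_APPLE), resolveFramebuffer)
glResolveMultisampleFramebufferAPPLE()
// the resolved pixels can now be read back
glBindFramebufferOES(GLenum(GL_FRAMEBUFFER_OES), resolveFramebuffer)
glReadPixels(0, 0, GLsizei(wid), GLsizei(hei), GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), bytes)
glBindFramebufferOES(GLenum(GL_FRAMEBUFFER_OES), GLuint(defaultFBO))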

Related

How to simply render ID3D11Texture2D

I am working on a D2D/D3D interoperability program. After getting an ID3D11Texture2D from outside, I need to render it into a designated area. I tried CreateDxgiSurfaceRenderTarget, but it has no effect.
The code is below. It runs "normally": there are no errors and the debugging information is displayed, but the window shows a black screen. If I ignore the texture parameter and only call _back_render_target->FillRectangle, that part works.
HRESULT CGraphRender::DrawTexture(ID3D11Texture2D* texture, const RECT& dst_rect)
{
float dpi = GetDpiFromD2DFactory(_d2d_factory);
CComPtr<ID3D11Texture2D> temp_texture2d;
CComPtr<ID2D1RenderTarget> temp_render_target;
CComPtr<ID2D1Bitmap> temp_bitmap;
D3D11_TEXTURE2D_DESC desc = { 0 };
texture->GetDesc(&desc);
CD3D11_TEXTURE2D_DESC capture_texture_desc(DXGI_FORMAT_B8G8R8A8_UNORM, desc.Width, desc.Height, 1, 1, D3D11_BIND_RENDER_TARGET);
HRESULT hr = _d3d_device->CreateTexture2D(&capture_texture_desc, nullptr, &temp_texture2d);
RETURN_ON_FAIL(hr);
CComPtr<IDXGISurface> dxgi_capture;
hr = temp_texture2d->QueryInterface(IID_PPV_ARGS(&dxgi_capture));
RETURN_ON_FAIL(hr);
D2D1_RENDER_TARGET_PROPERTIES rt_props = D2D1::RenderTargetProperties(D2D1_RENDER_TARGET_TYPE_DEFAULT, D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED), dpi, dpi);
hr = _d2d_factory->CreateDxgiSurfaceRenderTarget(dxgi_capture, rt_props, &temp_render_target);
RETURN_ON_FAIL(hr);
D2D1_BITMAP_PROPERTIES bmp_prop = D2D1::BitmapProperties(D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED), dpi, dpi);
hr = temp_render_target->CreateBitmap(D2D1::SizeU(desc.Width, desc.Height), bmp_prop, &temp_bitmap);
RETURN_ON_FAIL(hr);
CComPtr<ID3D11DeviceContext> immediate_context;
_d3d_device->GetImmediateContext(&immediate_context);
if (!immediate_context)
{
return E_UNEXPECTED;
}
immediate_context->CopyResource(temp_texture2d, texture);
D2D1_POINT_2U src_point = D2D1::Point2U();
D2D1_RECT_U src_rect = D2D1::RectU(0, 0, desc.Width, desc.Height);
hr = temp_bitmap->CopyFromRenderTarget(&src_point, temp_render_target, &src_rect);
RETURN_ON_FAIL(hr);
D2D1_RECT_F d2d1_rect = { (float)dst_rect.left, (float)dst_rect.top, (float)dst_rect.right, (float)dst_rect.bottom};
_back_render_target->DrawBitmap(temp_bitmap, d2d1_rect);
return S_OK;
}
Referring to DirectXTK, I had a brief look at its SpriteBatch code. It has to set up a bunch of context state, and I don't yet understand how it all fits together, or whether my existing code interferes with it, for example by setting the render target view, viewport, or even shaders.
Is there a relatively simple and effective method, as simple and brute-force as ID2D1RenderTarget's DrawBitmap?

Failure to create EGLSurface using the RenderResolutionScale property on Windows

I'm trying to create an EGLSurface in a Windows UWP app. The creation code is in a xaml.cpp file, as shown below.
When I try creating the surface using the optional property EGLRenderResolutionScaleProperty, it fails with an EGL_BAD_ALLOC error. Two alternate approaches work, but I need to try to use the resolution scale option for my app.
void MyClass::CreateRenderSurface()
{
if (mRenderSurface == EGL_NO_SURFACE)
{
// NOTE: in practice, I only have one of the three following implementations in the code;
// all are included together here for ease of comparison.
// 1. This works
mRenderSurface = CreateSurface(mSwapChainPanel, nullptr, nullptr);
// 2. and this works (here I hardwired the size to twice the
// size of the window I happen to be using, because the
// Windows display scaling is set to 200%)
Size size;
size.Height = 1448; // hardwired value for testing, in this case window height is 724 pix
size.Width = 1908; // hardwired value for testing, in this case window width is 954 pix
mRenderSurface = CreateSurface(mSwapChainPanel, &size, nullptr);
// 3. but this fails (and this is the one I want to use)
float resolutionScale = 1.0;
mRenderSurface = CreateSurface(mSwapChainPanel, nullptr, &resolutionScale);
}
}
EGLSurface MyClass::CreateSurface(SwapChainPanel^ panel, const Size* renderSurfaceSize, const float* resolutionScale)
{
if (!panel)
{
throw Exception::CreateException(E_INVALIDARG, L"SwapChainPanel parameter is invalid");
}
if (renderSurfaceSize != nullptr && resolutionScale != nullptr)
{
throw Exception::CreateException(E_INVALIDARG, L"A size and a scale can't both be specified");
}
EGL _egl = this->HelperClass->GetEGL();
EGLSurface surface = EGL_NO_SURFACE;
const EGLint surfaceAttributes[] =
{
EGL_ANGLE_SURFACE_RENDER_TO_BACK_BUFFER, EGL_TRUE,
EGL_NONE
};
// Create a PropertySet and initialize with the EGLNativeWindowType.
PropertySet^ surfaceCreationProperties = ref new PropertySet();
surfaceCreationProperties->Insert(ref new String(EGLNativeWindowTypeProperty), panel);
// If a render surface size is specified, add it to the surface creation properties
if (renderSurfaceSize != nullptr)
{
surfaceCreationProperties->Insert(ref new String(EGLRenderSurfaceSizeProperty), PropertyValue::CreateSize(*renderSurfaceSize));
}
// If a resolution scale is specified, add it to the surface creation properties
if (resolutionScale != nullptr)
{
surfaceCreationProperties->Insert(ref new String(EGLRenderResolutionScaleProperty), PropertyValue::CreateSingle(*resolutionScale));
}
surface = eglCreateWindowSurface(_egl._display, _egl._config, reinterpret_cast<IInspectable*>(surfaceCreationProperties), surfaceAttributes);
EGLint err = eglGetError();
if (surface == EGL_NO_SURFACE)
{
throw Exception::CreateException(E_FAIL, L"Failed to create EGL surface");
}
return surface;
}
where
const wchar_t EGLNativeWindowTypeProperty[] = L"EGLNativeWindowTypeProperty";
const wchar_t EGLRenderSurfaceSizeProperty[] = L"EGLRenderSurfaceSizeProperty";
const wchar_t EGLRenderResolutionScaleProperty[] = L"EGLRenderResolutionScaleProperty";
I have tried changing the cast of the EGLNativeWindowType argument (as in How to create EGLSurface using C++/WinRT and ANGLE?) - that only creates other problems. As indicated, this code does work to create a surface in the basic case, just not when using the EGLRenderResolutionScaleProperty.
My guess is that something about the way I'm supplying that property is failing, because it fails on what should be reasonable values (e.g., 1.0).
Solved this by first checking that swapChainPanel size is not zero:
void MyClass::CreateRenderSurface()
{
if (mRenderSurface == EGL_NO_SURFACE)
{
// Don't try to create the surface until the panel has a non-zero size;
// this method is called again later once layout has happened (see note below).
if (0 == mSwapChainPanel->ActualHeight || 0 == mSwapChainPanel->ActualWidth)
{
return;
}
float resolutionScale = 1.0f; // the scale the app wants to use
mRenderSurface = CreateSurface(mSwapChainPanel, nullptr, &resolutionScale);
}
}
(The code checks elsewhere whether the render surface has been created, and will call this again if needed.)
Interestingly, the original code that used nullptr for both size and resolution arguments (case 1 in original snippet above) didn't need that check.

Use a texture array as Direct2D surface render target

I am trying to create a Direct3D 11 texture array holding multiple pages of text rendered using DirectWrite and Direct2D. Suppose layouts holds the IDWriteTextLayouts for the individual pages; then I try to do the following:
{
D3D11_TEXTURE2D_DESC desc;
::ZeroMemory(&desc, sizeof(desc));
desc.ArraySize = static_cast<UINT>(layouts.size());
desc.BindFlags = D3D11_BIND_RENDER_TARGET;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.Height = height;
desc.MipLevels = 1;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.Width = width;
auto hr = this->_d3dDevice->CreateTexture2D(&desc, nullptr, &retval.Texture);
if (FAILED(hr)) {
throw std::system_error(hr, com_category());
}
}
for (auto &l : layouts) {
ATL::CComPtr<IDXGISurface> surface;
{
auto hr = retval.Texture->QueryInterface(&surface);
if (FAILED(hr)) {
// The code fails here with E_NOINTERFACE "No such interface supported."
throw std::system_error(hr, com_category());
}
}
// Go on creating the RT from 'surface'.
}
The problem is that the code fails at the designated line, where I try to obtain the IDXGISurface interface from the ID3D11Texture2D, if there is more than one page (desc.ArraySize > 1). I eventually found in the documentation (https://learn.microsoft.com/en-us/windows/win32/api/dxgi/nn-dxgi-idxgisurface) that this is by design:
If the 2D texture [...] does not consist of an array of textures, QueryInterface succeeds and returns a pointer to the IDXGISurface interface pointer. Otherwise, QueryInterface fails and does not return the pointer to IDXGISurface.
Is there any other way to obtain the individual DXGI surfaces in the texture array to draw to them one after the other using Direct2D?
As I could not find any way to address the sub-surfaces, I now create a staging texture with one layer to render to and copy the result into the texture array using ID3D11DeviceContext::CopySubresourceRegion.
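For illustration, a rough sketch of that workaround (the page index i, the single-layer staging texture pageTexture, and the device context context are hypothetical names):
// Copy the page just rendered into 'pageTexture' into array slice 'i' of the array texture.
// With MipLevels == 1, the destination subresource index equals the array slice.
const UINT dstSubresource = D3D11CalcSubresource(0, static_cast<UINT>(i), 1);
context->CopySubresourceRegion(retval.Texture, dstSubresource, 0, 0, 0, pageTexture, 0, nullptr);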
How about Texture => IDXGIResource1 => CreateSubresourceSurface ?
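That looks like a viable route. A hedged sketch of what it might look like (IDXGIResource1 and IDXGISurface2 require DXGI 1.2; error handling is elided, and arraySlice is a hypothetical loop variable):
ATL::CComPtr<IDXGIResource1> resource1;
auto hr = retval.Texture->QueryInterface(&resource1);
// With MipLevels == 1, the subresource index equals the array slice.
ATL::CComPtr<IDXGISurface2> surface;
hr = resource1->CreateSubresourceSurface(arraySlice, &surface);
// 'surface' could then be handed to ID2D1Factory::CreateDxgiSurfaceRenderTarget as before.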

Very slow framerate with AVFoundation and Metal on macOS

I'm trying to adapt Apple's AVCamFilter sample to macOS. The filtering appears to work, but rendering the processed image through Metal gives me a framerate of several seconds per frame. I've tried different approaches, but have been stuck for a long time.
This is the project AVCamFilterMacOS - can anyone with better knowledge of AVFoundation and Metal tell me what's wrong? I've been reading the documentation and practicing getting the unprocessed image to display, as well as rendering other things like models to the Metal view, but I can't seem to get the processed CMSampleBuffer to render at a reasonable framerate.
Even if I skip the renderer and send the videoPixelBuffer to the Metal view directly, the view's performance is pretty jittery.
Here is some of the relevant rendering code I'm using in the controller:
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
processVideo(sampleBuffer: sampleBuffer)
}
func processVideo(sampleBuffer: CMSampleBuffer) {
if !renderingEnabled {
return
}
guard let videoPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer) else {
return
}
if !self.videoFilter.isPrepared {
/*
outputRetainedBufferCountHint is the number of pixel buffers the renderer retains. This value informs the renderer
how to size its buffer pool and how many pixel buffers to preallocate. Allow 3 frames of latency to cover the dispatch_async call.
*/
self.videoFilter.prepare(with: formatDescription, outputRetainedBufferCountHint: 3)
}
// Send the pixel buffer through the filter
guard let filteredBuffer = self.videoFilter.render(pixelBuffer: videoPixelBuffer) else {
print("Unable to filter video buffer")
return
}
self.previewView.pixelBuffer = filteredBuffer
}
And from the renderer:
func render(pixelBuffer: CVPixelBuffer) -> CVPixelBuffer? {
if !isPrepared {
assertionFailure("Invalid state: Not prepared.")
return nil
}
var newPixelBuffer: CVPixelBuffer?
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, outputPixelBufferPool!, &newPixelBuffer)
guard let outputPixelBuffer = newPixelBuffer else {
print("Allocation failure: Could not get pixel buffer from pool. (\(self.description))")
return nil
}
guard let inputTexture = makeTextureFromCVPixelBuffer(pixelBuffer: pixelBuffer, textureFormat: .bgra8Unorm),
let outputTexture = makeTextureFromCVPixelBuffer(pixelBuffer: outputPixelBuffer, textureFormat: .bgra8Unorm) else {
return nil
}
// Set up command queue, buffer, and encoder.
guard let commandQueue = commandQueue,
let commandBuffer = commandQueue.makeCommandBuffer(),
let commandEncoder = commandBuffer.makeComputeCommandEncoder() else {
print("Failed to create a Metal command queue.")
CVMetalTextureCacheFlush(textureCache!, 0)
return nil
}
commandEncoder.label = "Rosy Metal"
commandEncoder.setComputePipelineState(computePipelineState!)
commandEncoder.setTexture(inputTexture, index: 0)
commandEncoder.setTexture(outputTexture, index: 1)
// Set up the thread groups.
let width = computePipelineState!.threadExecutionWidth
let height = computePipelineState!.maxTotalThreadsPerThreadgroup / width
let threadsPerThreadgroup = MTLSizeMake(width, height, 1)
let threadgroupsPerGrid = MTLSize(width: (inputTexture.width + width - 1) / width,
height: (inputTexture.height + height - 1) / height,
depth: 1)
commandEncoder.dispatchThreadgroups(threadgroupsPerGrid, threadsPerThreadgroup: threadsPerThreadgroup)
commandEncoder.endEncoding()
commandBuffer.commit()
return outputPixelBuffer
}
func makeTextureFromCVPixelBuffer(pixelBuffer: CVPixelBuffer, textureFormat: MTLPixelFormat) -> MTLTexture? {
let width = CVPixelBufferGetWidth(pixelBuffer)
let height = CVPixelBufferGetHeight(pixelBuffer)
// Create a Metal texture from the image buffer.
var cvTextureOut: CVMetalTexture?
CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer, nil, textureFormat, width, height, 0, &cvTextureOut)
guard let cvTexture = cvTextureOut, let texture = CVMetalTextureGetTexture(cvTexture) else {
CVMetalTextureCacheFlush(textureCache, 0)
return nil
}
return texture
}
And finally the metal view:
override func draw(_ rect: CGRect) {
var pixelBuffer: CVPixelBuffer?
var mirroring = false
var rotation: Rotation = .rotate0Degrees
syncQueue.sync {
pixelBuffer = internalPixelBuffer
mirroring = internalMirroring
rotation = internalRotation
}
guard let drawable = currentDrawable,
let currentRenderPassDescriptor = currentRenderPassDescriptor,
let previewPixelBuffer = pixelBuffer else {
return
}
// Create a Metal texture from the image buffer.
let width = CVPixelBufferGetWidth(previewPixelBuffer)
let height = CVPixelBufferGetHeight(previewPixelBuffer)
if textureCache == nil {
createTextureCache()
}
var cvTextureOut: CVMetalTexture?
CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
textureCache!,
previewPixelBuffer,
nil,
.bgra8Unorm,
width,
height,
0,
&cvTextureOut)
guard let cvTexture = cvTextureOut, let texture = CVMetalTextureGetTexture(cvTexture) else {
print("Failed to create preview texture")
CVMetalTextureCacheFlush(textureCache!, 0)
return
}
if texture.width != textureWidth ||
texture.height != textureHeight ||
self.bounds != internalBounds ||
mirroring != textureMirroring ||
rotation != textureRotation {
setupTransform(width: texture.width, height: texture.height, mirroring: mirroring, rotation: rotation)
}
// Set up command buffer and encoder
guard let commandQueue = commandQueue else {
print("Failed to create Metal command queue")
CVMetalTextureCacheFlush(textureCache!, 0)
return
}
guard let commandBuffer = commandQueue.makeCommandBuffer() else {
print("Failed to create Metal command buffer")
CVMetalTextureCacheFlush(textureCache!, 0)
return
}
guard let commandEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: currentRenderPassDescriptor) else {
print("Failed to create Metal command encoder")
CVMetalTextureCacheFlush(textureCache!, 0)
return
}
commandEncoder.label = "Preview display"
commandEncoder.setRenderPipelineState(renderPipelineState!)
commandEncoder.setVertexBuffer(vertexCoordBuffer, offset: 0, index: 0)
commandEncoder.setVertexBuffer(textCoordBuffer, offset: 0, index: 1)
commandEncoder.setFragmentTexture(texture, index: 0)
commandEncoder.setFragmentSamplerState(sampler, index: 0)
commandEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4)
commandEncoder.endEncoding()
// Draw to the screen.
commandBuffer.present(drawable)
commandBuffer.commit()
}
All of this code is in the linked project
Capture device delegates don't own the sample buffers they receive in their callbacks, so it's incumbent on the receiver to make sure they're retained for as long as their contents are needed. This project doesn't currently ensure that.
Rather, by calling CMSampleBufferGetImageBuffer and wrapping the resulting pixel buffer in a texture, the view controller is allowing the sample buffer to be released, meaning that future operations on its corresponding pixel buffer are undefined.
One way to ensure the sample buffer lives long enough to be processed is to add a private member to the camera view controller class that retains the most-recently received sample buffer:
private var sampleBuffer: CMSampleBuffer!
and then set this member in the captureOutput(...) method before calling processVideo. You don't even have to reference it further; the fact that it's retained should prevent the stuttery and unpredictable behavior you're seeing.
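A minimal sketch of that change in the capture delegate (assuming the property above):
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
// Keep a strong reference so the sample buffer (and its backing pixel buffer)
// stays alive while the Metal textures created from it are still in use.
self.sampleBuffer = sampleBuffer
processVideo(sampleBuffer: sampleBuffer)
}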
This solution may not be perfect, since it retains the sample buffer for longer than strictly necessary in the event of a capture session interruption or other pause. You can devise your own scheme for managing object lifetimes; the important thing is to ensure that the root sample buffer object sticks around until you're done with any textures that refer to its contents.

Audio Units: getting generated audio into a render callback

I've been trying, without success, to manipulate the samples produced by kAudioUnitType_Generator audio units by attaching an AURenderCallbackStruct to the input of the audio unit immediately downstream. I managed to get this working on OS X using the following simple graph:
(input callback) -> multichannel mixer -> (input callback) -> default output
But I've failed with the following (even simpler) graphs that start with a generator unit:
speech synthesis -> (input callback) -> default output | fails in render callback with kAudioUnitErr_Uninitialized
audio file player -> (input callback) -> default output | fails when scheduling file region with kAudioUnitErr_Uninitialized
I've tried just about everything I can think of, from setting the ASBD format to the sample rates, but I always get these errors. Does anyone know how to set up a graph where we can manipulate the samples from these nice generator units?
Below is the failing render callback function and the graph instantiation method for the speech synthesis attempt. The audio file player version is almost identical, except for setting up the file playback, of course. Both of these setups work if I remove the callback and add an AUGraphConnectNodeInput in its place...
static OSStatus RenderCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
AppDelegate *app = (AppDelegate *)inRefCon;
AudioUnit inputUnit = app->_speechUnit;
OSStatus status = noErr;
status = AudioUnitRender(inputUnit, ioActionFlags, inTimeStamp, 0, inNumberFrames, ioData);
// *** ERROR *** kAudioUnitErr_Uninitialized, code: -10867
// ... insert processing code here...
return status;
}
- (int)createSynthGraph {
AUGRAPH_CHECK( NewAUGraph( &_graph ) );
AUNode speechNode, outputNode;
// speech synthesizer
AudioComponentDescription speechCD = {0};
speechCD.componentType = kAudioUnitType_Generator;
speechCD.componentSubType = kAudioUnitSubType_SpeechSynthesis;
speechCD.componentManufacturer = kAudioUnitManufacturer_Apple;
// output device (speakers)
AudioComponentDescription outputCD = {0};
outputCD.componentType = kAudioUnitType_Output;
outputCD.componentSubType = kAudioUnitSubType_DefaultOutput;
outputCD.componentManufacturer = kAudioUnitManufacturer_Apple;
AUGRAPH_CHECK( AUGraphAddNode( _graph, &outputCD, &outputNode ) );
AUGRAPH_CHECK( AUGraphAddNode( _graph, &speechCD, &speechNode ) );
AUGRAPH_CHECK( AUGraphOpen( _graph ) );
AUGRAPH_CHECK( AUGraphNodeInfo( _graph, outputNode, NULL, &_outputUnit ) );
AUGRAPH_CHECK( AUGraphNodeInfo( _graph, speechNode, NULL, &_speechUnit ) );
// setup stream formats:
AudioStreamBasicDescription streamFormat = [self streamFormat];
AU_CHECK( AudioUnitSetProperty( _speechUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &streamFormat, sizeof(streamFormat) ) );
// setup callback:
AURenderCallbackStruct callback;
callback.inputProc = RenderCallback;
callback.inputProcRefCon = self;
AUGRAPH_CHECK( AUGraphSetNodeInputCallback ( _graph, outputNode, 0, &callback ) );
// init and start
AUGRAPH_CHECK( AUGraphInitialize( _graph ) );
AUGRAPH_CHECK( AUGraphStart( _graph ) );
return 0;
}
You must connect these nodes together with AUGraphConnectNodeInput; the AUGraph will not initialize AudioUnits that are not connected to other units in the graph.
You could also try manually initializing them before starting the graph.
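For example, a minimal sketch of the second option, reusing the macros from the question: since the speech unit has no connection in the graph, initialize it by hand before starting the graph so that AudioUnitRender in the callback no longer returns kAudioUnitErr_Uninitialized:
// The speech unit is opened by AUGraphOpen but never initialized, because nothing
// is connected to it; initialize it manually before starting the graph.
AU_CHECK( AudioUnitInitialize( _speechUnit ) );
AUGRAPH_CHECK( AUGraphInitialize( _graph ) );
AUGRAPH_CHECK( AUGraphStart( _graph ) );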
The kAudioUnitErr_Uninitialized error appears when your audio unit is not initialized. Just initialize your graph before setting the properties that are failing; this will initialize all opened audio units in the graph. From the AUGraphInitialize discussion in AUGraph.h:
AudioUnitInitialize() is called on each opened node/AudioUnit
(get ready to render) and SubGraph that are involved in a
interaction.
