Delay in AUGraph callback - Cocoa

We are developing a music player app for OS X Lion (10.7) that applies different audio effects to a selected music file.
We have used the Audio Unit and AUGraph APIs to achieve this.
However, after connecting all the audio unit nodes, when we call AUGraphStart(mGraph) the graph takes around 1 second to invoke the first I/O callback.
Because of this there is a slight delay at the beginning of playback.
How can we avoid this delay? Could anyone provide any input to help us solve this issue?

One solution is to start the audio graph running before displaying any UI that the user could use to start playback. Since the audio units will already be running, you can fill the audio output buffers with silence until the appropriate UI event arrives. If the buffers are small/short, the latency from the UI event until an output buffer is filled with real audio may be small enough to fall below normal human perception.
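For illustration, here is a minimal sketch of that approach in C (the language of the AUGraph API), assuming a render callback attached to the graph's output unit; the gPlaying flag and the callback name are illustrative, not from the original question:

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

static volatile Boolean gPlaying = false; // flipped to true by the play button

static OSStatus RenderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    if (!gPlaying) {
        // The graph is already warm; emit silence until the user hits play.
        for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
            memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
        }
        *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
        return noErr;
    }
    // ...pull real audio from the file/effect chain here...
    return noErr;
}

Calling AUGraphStart(mGraph) at launch then absorbs the roughly 1-second startup cost while only silence is audible; when the user presses play you just set gPlaying, so the first audible buffer arrives within one render cycle.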

Related

How does Windows Media Foundation Media Source or Source Reader handle overrun?

I've implemented a UVC video viewing application using the Source Reader in async mode (OnReadSample()). The connected camera produces RAW10 frames, and the app can display just the raw images or perform additional processing (within the OnReadSample() callback) and display the generated output as well (i.e., two viewers). The two images are displayed correctly, with the exception of a lag (i.e., camera to display) because the additional processing time is greater than the frame interval (1/FPS).
How does the Media Source handle an overrun scenario? My understanding (please correct me if wrong) is that new IMFSample objects (i.e., image containers) are created and queued, but I've yet to find info on what happens when the queue depth is reached.
Can the Media Source queue depth be set to a particular number?
Some additional system details:
Win 10
Direct3D9
Thanks,
Steve.

IMFMediaEngine duplicate player surface

How can I use IMFMediaEngine to play one video simultaneously in two areas or windows?
The IMFMediaEngineClassFactory::CreateInstance method supports a frame-server mode and a rendering mode. Rendering mode creates a single video output tied to a window HWND or a DirectComposition visual.
Does that mean I need to use frame-server mode? If so, how do I produce two outputs with it? I also need asynchronous output, so that playback won't be interrupted by the main thread.

How to capture screen activities in windows phone 8?

I'm new to Windows Phone 8 and need your help capturing screen activity as a video. I have to make a video of the actions performed on screen.
One solution that occurred to me is to capture the screen as images by firing a timer at fixed intervals, but this doesn't seem like the right approach since I have to produce a video of the screen activity. Please suggest how to handle this problem.
There's no built-in way of doing what you want.
You will need two things:
Use a dispatcher timer as you describe to capture the frames.
Find code that will encode those frames into a movie. That's not an API the phone supports, so you will need to find existing code and use it. I am not aware of such code existing, but I have only looked for it once or twice and not very hard. You could, potentially, create an MJPEG, which is a fairly simple video format, but even that is not trivial, and the resulting file size can be prohibitive.

WP7 simultaneous recording and playing sound

I need to play a short sound repeatedly (simulating a metronome) while recording sound.
What I did for the metronome was basically set up a DispatcherTimer with a specific Interval and fire a SoundEffect on every tick. For the recorder I call XNA's FrameworkDispatcher.Update method every 33 ms (also using a DispatcherTimer for that).
I run the metronome and it works fine, but when I begin to record there's a short break in the playing sound (hard to say if it delays the Interval or just mutes the sound), and after a while (when already recording) the metronome continues to tick, but with a more 'flattened' sound.
Is this a hardware limitation, or am I doing something wrong?
I think this is connected with the hardware. I was making an app to modify sound as it is captured. When I used a headset (with mic) connected to the device there was a big echo on playback. When I used only headphones (and the device mic) everything was OK. I tested on HTC and Nokia with the same results, but the HTC was a little bit better :)

QTKit audio mute and Stream Buffering status message

I'm muting the volume in QTKit like this:
if ([muteButton state] == NSOnState) {
    [mMovieVolumeSlider setFloatValue:0.1];
    [testMovie setVolume:0.1];
}
The problem is that the volume attenuation is sudden and abrupt. How can I add a fade effect to the volume change?
Also, my app plays .pls audio stream files, which I have embedded in the bundle. When selecting a stream within the app, a short delay is common before the stream begins to play. I want to display some sort of status message ("Buffering" or "Connecting") during this short delay prior to connecting, and end the status message when the stream begins. Any ideas on how to approach this?
thanks for the help.
-paul.
I'll just outline my answers to your two questions as suggestions:
What you want to accomplish sounds much like a good fit for NSAnimation, either through subclassing (overriding setCurrentProgress:) or delegation (animation:valueForProgress: will probably be your friend here); see the first sketch below.
Open the QTMovie asynchronously and listen for QTMovieLoadStateDidChangeNotification; see the second sketch below.
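To illustrate the first suggestion, here is a minimal sketch of the subclassing route, assuming you have a QTMovie to fade; VolumeFadeAnimation and its property names are illustrative, not part of QTKit:

#import <Cocoa/Cocoa.h>
#import <QTKit/QTKit.h>

@interface VolumeFadeAnimation : NSAnimation
@property (assign) QTMovie *movie;   // movie whose volume we fade
@property (assign) float startVolume;
@property (assign) float endVolume;
@end

@implementation VolumeFadeAnimation
@synthesize movie, startVolume, endVolume;

// NSAnimation calls this repeatedly as the animation advances.
- (void)setCurrentProgress:(NSAnimationProgress)progress
{
    [super setCurrentProgress:progress];
    float v = startVolume + (endVolume - startVolume) * progress;
    [movie setVolume:v];
}
@end

// Usage: fade from the current volume down to 0.1 over half a second.
VolumeFadeAnimation *fade =
    [[VolumeFadeAnimation alloc] initWithDuration:0.5
                                   animationCurve:NSAnimationEaseInOut];
fade.movie = testMovie;
fade.startVolume = [testMovie volume];
fade.endVolume = 0.1f;
[fade setAnimationBlockingMode:NSAnimationNonblocking];
[fade startAnimation];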
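And for the second suggestion, a minimal sketch of the buffering message, assuming a statusField text field outlet and a bundled .pls resource (both names are hypothetical):

// In your stream-selection action: open the movie asynchronously
// so the UI is not blocked while the stream connects.
NSURL *url = [NSURL fileURLWithPath:
    [[NSBundle mainBundle] pathForResource:@"stream" ofType:@"pls"]];
NSDictionary *attrs = [NSDictionary dictionaryWithObjectsAndKeys:
    url, QTMovieURLAttribute,
    [NSNumber numberWithBool:YES], QTMovieOpenAsyncRequiredAttribute,
    nil];
QTMovie *movie = [QTMovie movieWithAttributes:attrs error:NULL];

[statusField setStringValue:@"Buffering..."];
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(loadStateDidChange:)
                                             name:QTMovieLoadStateDidChangeNotification
                                           object:movie];

// Elsewhere in the same controller:
- (void)loadStateDidChange:(NSNotification *)note
{
    QTMovie *movie = [note object];
    long state = [[movie attributeForKey:QTMovieLoadStateAttribute] longValue];
    if (state >= QTMovieLoadStatePlayable) {
        [statusField setStringValue:@""]; // stream is ready; clear the message
        [movie autoplay];
    }
}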
HTH
Daniel
