AudioQueueOutputCallback not called at first - macOS

My question may be similar to this: Why might my AudioQueueOutputCallback not be called?
It seems that person was able to fix it by running the audio code on the main thread. I cannot do that.
I enqueue buffers to prime the audio queue, then start the queue. Shouldn't those buffers complete immediately once I start my queue?
I am setting the data size correctly.
As a hack, I simply re-use buffers without waiting for the callback to report them as done. If I do this, playback runs like that for a couple of seconds, and then the buffer callback starts working from then on.

It's definitely not a good idea to hack your way around Core Audio. While it may be a quick fix, it will hurt you in subtle ways in the long run.
Your problem isn't the same as the one in the link you posted; their problem was assigning the callback on the wrong thread. In your case the callback is on the right thread, it's just that the audio buffers you are feeding it initially are either empty, too small, or contain data not fit for audio playback.
Keep in mind that the purpose of the callback is to fire after each audio buffer supplied to the audio queue has been played (i.e. consumed). The fact that the callback isn't fired after you start the queue means there is nothing in the audio buffers for it to consume, or too little meaningful data.
When you do it manually, you see a lag because the audio queue is trying to process the empty or erroneous buffers you supplied. You then resupply the same buffers with valid data, which the queue eventually plays, and then it fires the callback.
Solution: compare the data you put in the buffers before starting the queue with the data you supply manually; I'm sure there is a difference. If that doesn't help, please show your code for further analysis.
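For reference, here is a minimal priming sketch along those lines (AudioToolbox; kNumBuffers, kBufferBytes and FillWithAudio are illustrative placeholders, and error handling is omitted):

    // Prime the queue with valid, non-empty audio BEFORE starting it.
    #include <AudioToolbox/AudioToolbox.h>

    static const int    kNumBuffers  = 3;
    static const UInt32 kBufferBytes = 32 * 1024;

    // Hypothetical app-specific source; returns the number of bytes written.
    extern UInt32 FillWithAudio(void *dst, UInt32 maxBytes);

    void PrimeAndStart(AudioQueueRef queue)
    {
        AudioQueueBufferRef buffers[kNumBuffers];
        for (int i = 0; i < kNumBuffers; ++i) {
            AudioQueueAllocateBuffer(queue, kBufferBytes, &buffers[i]);
            UInt32 filled = FillWithAudio(buffers[i]->mAudioData, kBufferBytes);
            buffers[i]->mAudioDataByteSize = filled;   // must be non-zero
            AudioQueueEnqueueBuffer(queue, buffers[i], 0, NULL);
        }
        // The output callback fires as each of these buffers finishes playing.
        AudioQueueStart(queue, NULL);
    }

If the callback still never fires, log the mAudioDataByteSize you set on each primed buffer; a zero or near-zero size is the usual culprit.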

Control Chromecast buffering at start

Is there a way to control the amount of buffering CC devices do before they start playback?
My sender app sends real-time FLAC audio, and the CC waits 10+ seconds before starting to play. I've built a custom receiver and tried changing autoPauseDuration and autoResumeDuration, but they don't seem to matter. I assume they are only used when an underflow event happens, not at startup.
I realize that forcing a start at a low buffering level might end up in an underflow, but that's a "risk" that is much better than always waiting so long before playback starts. And if underflow does happen, the autoPause/autoResume hysteresis would allow a larger re-buffering to take place then.
If you are using the Media Player Library, take a look at player.getBufferDuration. The docs cover more details about how you can customize the player behavior: https://developers.google.com/cast/docs/player#frequently-asked-questions
Finally, it turned out to be a problem with the way audio was sent to the default receiver. I was streaming FLAC, and since it is a streamable format, I did not include any header (you should be able to start anywhere in the stream; it's just a matter of finding the synchronization point). But the FLAC decoder in the CC does not like that and was taking 10+ seconds to start. As soon as I added a STREAMINFO header, the problem went away.
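For anyone hitting the same thing, here is a hedged sketch of what goes at the head of the stream: the "fLaC" marker followed by a single STREAMINFO metadata block (34 bytes of payload), packed per the FLAC spec. The 4096 block size and the zeroed frame sizes/total samples/MD5 ("unknown") are assumptions; fill in your stream's real parameters where you can, and prepend the result before the first audio frame:

    // Build a minimal FLAC stream header: "fLaC" + STREAMINFO block.
    #include <cstdint>
    #include <vector>

    std::vector<uint8_t> MakeFlacHeader(uint32_t sampleRate,    // e.g. 44100
                                        uint8_t  channels,      // e.g. 2
                                        uint8_t  bitsPerSample) // e.g. 16
    {
        std::vector<uint8_t> h = { 'f', 'L', 'a', 'C' };

        // Metadata block header: last-block flag set, type 0 (STREAMINFO),
        // 24-bit big-endian payload length of 34.
        h.push_back(0x80);
        h.push_back(0); h.push_back(0); h.push_back(34);

        uint8_t si[34] = {0};
        si[0] = 0x10; si[1] = 0x00;    // min block size = 4096 (assumed)
        si[2] = 0x10; si[3] = 0x00;    // max block size = 4096 (assumed)
        // bytes 4..9: min/max frame size, 0 = unknown
        // 20-bit sample rate | 3-bit channels-1 | 5-bit bps-1 | 36-bit total samples
        si[10] = (sampleRate >> 12) & 0xFF;
        si[11] = (sampleRate >> 4)  & 0xFF;
        si[12] = ((sampleRate & 0x0F) << 4)
               | (((channels - 1) & 0x07) << 1)
               | (((bitsPerSample - 1) >> 4) & 0x01);
        si[13] = ((bitsPerSample - 1) & 0x0F) << 4; // low nibble: total-samples MSBs (0)
        // bytes 14..17: rest of total samples, 0 = unknown
        // bytes 18..33: MD5 of the unencoded audio, all zero = unknown
        h.insert(h.end(), si, si + 34);
        return h;
    }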

IAudioClient - get notified when playback ends?

I continuously send data to IAudioClient (GetBufferSize / GetCurrentPadding / GetBuffer / ReleaseBuffer), but I want to know when the audio device finishes playing the last data I sent. I do not want to assume the player stopped just because I sent the last chunk of data to the device: it might still be playing the buffered data.
I tried using IAudioClock / IAudioClock2 to check the hardware buffer position, but it stays the same from the moment I send the last chunk.
I also don't see anything relevant in the IMMNotificationClient and IAudioSessionNotification interfaces...
What am I missing?
Thanks!
IMMNotificationClient and IAudioSessionNotification are not going to help you; those are for detecting new devices and new application sessions, respectively. As far as I know, there is nothing in WASAPI that explicitly sends out an event when the last sample is consumed by the device (exclusive mode) or the audio engine (shared mode).
A trick I used in the past (albeit with DirectSound, but it should work equally well with WASAPI) is to continuously check the available space in the audio buffer (for WASAPI, using GetCurrentPadding). After you send the last sample, immediately record the current padding, say N frames. Then keep writing zeroes to the audio client until N frames have been processed (as reported by IAudioClock(2), or just guesstimate using a timer), then stop the stream.
Whether this works on an exclusive-mode, event-driven stream is a driver quality issue; the driver may choose to report the "real" playback position, or just process audio in chunks of the full buffer size.
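To illustrate, here is a rough sketch of that idea, written as a drain routine that compares the device clock against a running count of real frames written (shared-mode assumptions; the 48000 sample rate is a placeholder for your mix format's rate, and error handling is omitted):

    #include <windows.h>
    #include <audioclient.h>

    // Keep feeding silence until the device clock passes the last real
    // frame, then stop. totalFramesWritten is the count of real (non-silent)
    // frames your render loop has submitted so far.
    void DrainAndStop(IAudioClient* client, IAudioRenderClient* render,
                      IAudioClock* clock, UINT64 totalFramesWritten)
    {
        UINT64 freq = 0;                     // device clock ticks per second
        clock->GetFrequency(&freq);

        UINT32 bufferFrames = 0;
        client->GetBufferSize(&bufferFrames);

        for (;;) {
            UINT64 pos = 0, qpc = 0;
            clock->GetPosition(&pos, &qpc);
            // Convert the clock position to frames. 48000 is a placeholder;
            // use the sample rate from your actual mix format.
            if (pos * 48000 / freq >= totalFramesWritten)
                break;

            UINT32 padding = 0;              // frames still queued in the buffer
            client->GetCurrentPadding(&padding);
            UINT32 freeFrames = bufferFrames - padding;
            if (freeFrames > 0) {            // top up with silence
                BYTE* data = nullptr;
                render->GetBuffer(freeFrames, &data);
                render->ReleaseBuffer(freeFrames, AUDCLNT_BUFFERFLAGS_SILENT);
            }
            Sleep(5);
        }
        client->Stop();
    }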

Editing waveform audio input before it reaches an application

I am working on a voice changer that is supposed to manipulate the input buffer of a waveform-audio input device before the buffer is returned to an application.
The waveInOpen() function offers four options for being notified when a buffer provided by waveInAddBuffer() has been filled.
The options are CALLBACK_EVENT, CALLBACK_FUNCTION, CALLBACK_THREAD, and CALLBACK_WINDOW.
I have tried several things to get my waveform manipulation to work, but haven't found a reliable and clean solution yet.
What has worked so far is intercepting waveInAddBuffer() calls with Detours. I save every WAVEHDR pointer passed to waveInAddBuffer(), and each time the function is called I delay the program for a few milliseconds and search for waveform buffers that have been filled during the delay.
This isn't reliable, though, because the buffer size differs for each application, so there is no delay time that works for every application.
I would be really thankful for new ideas!
Edit:
Here is what else I have tried:
Most applications set multiple flags when calling waveInOpen() that actually exclude each other, so you can never be sure which callback method is actually used (e.g. the flags CALLBACK_EVENT | CALLBACK_FUNCTION | CALLBACK_WINDOW are all set).
When the CALLBACK_WINDOW flag is set, I used SetWindowLongPtr() to subclass the window so it would receive MM_WIM_DATA messages before the application's own window procedure. Unfortunately this didn't work; my subclass procedure never gets called.
I created a custom callback function to substitute for the application's callback function when the CALLBACK_FUNCTION flag is set. This didn't work either, because my function never gets called. I guess this is because my function is defined in a DLL, outside of the address space of the application.
There were several other things I tried that never could have worked, because I didn't know enough about injection and hooks at the time. I have learned quite a lot, and I can't really summarize everything I have tried, because it wouldn't help the cause.
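One direction worth trying, sketched below under heavy assumptions (one device at a time, no synchronization, only two of the four forwarding paths shown, and ApplyVoiceEffect is a hypothetical DSP routine): instead of substituting the application's callback after the fact, hook waveInOpen() itself with Detours, register your own CALLBACK_FUNCTION, manipulate the buffer in that callback, and then forward the notification using whatever mechanism the application originally requested.

    // Sketch: hook waveInOpen() with Detours and interpose our own callback.
    // DetourTransactionBegin/DetourAttach boilerplate is omitted.
    #include <windows.h>
    #include <mmsystem.h>
    #include <detours.h>

    // Hypothetical in-place DSP routine.
    extern void ApplyVoiceEffect(char* data, DWORD bytes);

    static MMRESULT (WINAPI *RealWaveInOpen)(LPHWAVEIN, UINT, LPCWAVEFORMATEX,
                                             DWORD_PTR, DWORD_PTR, DWORD) = waveInOpen;
    static DWORD_PTR g_appCallback, g_appInstance;
    static DWORD     g_appFlags;

    static void CALLBACK MyWaveInProc(HWAVEIN hwi, UINT msg, DWORD_PTR /*inst*/,
                                      DWORD_PTR p1, DWORD_PTR p2)
    {
        if (msg == WIM_DATA) {
            WAVEHDR* hdr = (WAVEHDR*)p1;
            ApplyVoiceEffect(hdr->lpData, hdr->dwBytesRecorded);
        }
        // Forward the notification the way the app originally asked for it.
        switch (g_appFlags & CALLBACK_TYPEMASK) {
        case CALLBACK_FUNCTION:
            ((void (CALLBACK*)(HWAVEIN, UINT, DWORD_PTR, DWORD_PTR, DWORD_PTR))
                g_appCallback)(hwi, msg, g_appInstance, p1, p2);
            break;
        case CALLBACK_WINDOW:   // WIM_DATA == MM_WIM_DATA, so msg forwards as-is
            PostMessage((HWND)g_appCallback, msg, (WPARAM)hwi, (LPARAM)p1);
            break;
        // CALLBACK_EVENT / CALLBACK_THREAD forwarding omitted for brevity.
        }
    }

    static MMRESULT WINAPI HookedWaveInOpen(LPHWAVEIN phwi, UINT dev,
                                            LPCWAVEFORMATEX fmt, DWORD_PTR cb,
                                            DWORD_PTR inst, DWORD flags)
    {
        g_appCallback = cb;     // HWND, function pointer, etc., per flags
        g_appInstance = inst;
        g_appFlags    = flags;
        return RealWaveInOpen(phwi, dev, fmt, (DWORD_PTR)MyWaveInProc, 0,
                              (flags & ~CALLBACK_TYPEMASK) | CALLBACK_FUNCTION);
    }

Since many applications set contradictory flag combinations anyway (as you observed), forwarding on every path the flags request is the safest bet.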

libtorrent new piece alerts

I am developing an application that will stream multimedia files over torrents.
The backend needs to serve new pieces to the frontend as they arrive.
I need a mechanism to get notified when new pieces have arrived and been verified. From what I can tell, I could do this using block_finished_alerts. I would keep track of which blocks have arrived for a given piece, and read the piece when all blocks have arrived.
This solution seems kind of roundabout and I was wondering if there was a better way.
What you're asking for is called piece_finished_alert. It's posted every time a new piece completes downloading and passes the hash-check. To read a piece from disk, you may use torrent_handle::read_piece() (and get the result in read_piece_alert).
However, if you want to stream media, you probably want to use torrent_handle::set_piece_deadline() and pass the alert_when_available flag so that read_piece_alerts are posted as pieces come in. This invokes the built-in streaming feature of libtorrent.
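A rough sketch of the resulting alert loop (libtorrent 1.2/2.x-style API assumed; the piece count, deadlines, and serve_to_frontend are illustrative):

    // Stream pieces via set_piece_deadline + read_piece_alert.
    // Assumes a torrent already added to the session.
    #include <libtorrent/session.hpp>
    #include <libtorrent/torrent_handle.hpp>
    #include <libtorrent/alert_types.hpp>
    #include <vector>

    // Hypothetical hand-off to the frontend.
    void serve_to_frontend(lt::piece_index_t piece, char const* data, int size);

    void pump_pieces(lt::session& ses, lt::torrent_handle h)
    {
        // Prioritize the next pieces in playback order; alert_when_available
        // makes libtorrent post a read_piece_alert as each one completes.
        for (int i = 0; i < 8; ++i)
            h.set_piece_deadline(lt::piece_index_t{i}, i * 100,
                                 lt::torrent_handle::alert_when_available);

        std::vector<lt::alert*> alerts;
        for (;;) {
            ses.wait_for_alert(lt::seconds(1));
            ses.pop_alerts(&alerts);
            for (lt::alert* a : alerts) {
                if (auto* rp = lt::alert_cast<lt::read_piece_alert>(a))
                    serve_to_frontend(rp->piece, rp->buffer.get(), rp->size);
            }
        }
    }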

GetMessage() while the thread is blocked in SwapBuffers()

Vsync blocks SwapBuffers(), which is what I want. My problem is that, since input messages go to the same thread that owns the window, any messages that come in while SwapBuffers() is blocked won't be processed immediately, but only after the vsync triggers the buffer swap and SwapBuffers() returns. So I have all my compute threads sitting idle instead of processing the scene for rendering in the next frame using the most recent input. I'm particularly concerned with having very low latency. I need some way to access all pending input messages to the window from other threads.
Windows API provides a way to wait for either Windows events or input messages using MsgWaitForMultipleObjects(), yet there's no similar way to wait for a buffer swap together with other things. That's very unfortunate.
I considered calling SwapBuffers() in another thread, but that requires glFinish() to be called in the window's thread before signalling another thread to SwapBuffers(), and glFinish() is still a blocking call so it's not a good solution.
I considered hooking, but that also looks like a dead end. Hooking with WH_GETMESSAGE will have GetMsgProc() called not asynchronously, but only when the window's thread calls GetMessage()/PeekMessage(), so it's no help. Installing a global hook doesn't help me either, due to the need to call RegisterTouchWindow() with a specific window handle to process WM_TOUCH -- and my input is touch. And while for mouse and keyboard you can install low-level hooks that capture messages as they're posted to the thread's queue, rather than when the thread calls GetMessage()/PeekMessage(), there appears to be no similar option for touch.
I also looked at wglDelayBeforeSwapNV(), but I don't see what prevents the OS from preempting the thread sometime after the call to that function but before SwapBuffers(), causing it to miss the next vsync signal.
So what's a good workaround? Can I make a second, invisible window, that will somehow be always the active one and so get all input messages, while the visible one is displaying the rendering? According to another discussion, message-only windows (CreateWindow with HWND_MESSAGE) are not compatible with WM_TOUCH. Is there perhaps some undocumented event that SwapBuffers() is internally waiting on that I could access and feed to MsgWaitForMultipleObjects()? My target is a fixed platform (Windows 8.1 64-bit) so I'm fine with using undocumented functionality, should it exist. I do want to avoid writing my own touchscreen driver, however.
Out of curiosity, why not implement your entire drawing logic in that other thread? It appears the problem you are running into is that the message pump is driven by the same thread that draws. Since Windows does not let you drive the message pump from a different thread than the one that created the window, the easiest solution would just be to push all the GL stuff into a different thread.
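To make that concrete, here is a rough sketch of the split (the input queue and names are illustrative, and it assumes SetPixelFormat() was already called on the window's DC):

    // The window thread only pumps messages and enqueues input; the render
    // thread owns the GL context and is the only one blocking in SwapBuffers().
    #include <windows.h>
    #include <atomic>
    #include <mutex>
    #include <queue>

    struct InputEvent { UINT msg; WPARAM wp; LPARAM lp; };
    std::mutex             g_lock;
    std::queue<InputEvent> g_input;        // window thread -> render thread
    std::atomic<bool>      g_running{true};

    DWORD WINAPI RenderThread(LPVOID param)
    {
        HDC   dc = (HDC)param;
        HGLRC rc = wglCreateContext(dc);
        wglMakeCurrent(dc, rc);            // context is current on THIS thread
        while (g_running) {
            {   // Drain the freshest input right before building the frame.
                std::lock_guard<std::mutex> hold(g_lock);
                while (!g_input.empty()) { /* apply g_input.front() */ g_input.pop(); }
            }
            // ... render the scene here ...
            SwapBuffers(dc);               // only this thread waits on vsync
        }
        wglMakeCurrent(NULL, NULL);
        wglDeleteContext(rc);
        return 0;
    }

    // In the window procedure (window thread): never block, just enqueue.
    //   case WM_TOUCH: /* ... */
    //       { std::lock_guard<std::mutex> hold(g_lock);
    //         g_input.push({ msg, wParam, lParam }); }
    //       return 0;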
SwapBuffers(...) is also not necessarily going to block. Per the requirements of vsync, an implementation need only block the next command that would modify the backbuffer while all backbuffers are pending a swap. Triple buffering changes things up a little by introducing a second backbuffer.
One possible implementation of triple buffering discards the oldest backbuffer when it comes time to swap, so SwapBuffers(...) would never block (this is effectively how modern versions of Windows work in windowed mode with the DWM enabled). Other implementations eventually present both backbuffers; this reduces (but does not eliminate) blocking, at the cost of displaying late frames.
Unfortunately, WGL does not let you request the number of backbuffers in a swap chain (beyond 0 = single-buffered or 1 = double-buffered); the only way to get triple buffering on Windows is through driver settings. The lowest message latency will come from driving GL in a different thread, but triple buffering can help a little while requiring no effort on your part.
