Determining how Speex-encoded audio differs from expected settings - debugging

I'm trying to integrate an application with another application that encodes audio using Speex. However, when I decode audio sent from the first application to the second, I'm getting noise (not static, more like bleep-bloopy twangs).
I need to know where to look for the problem.
The first application can talk to other instances of itself. The second application can talk to other instances of itself. They just can't talk to each other.
The Speex settings are apparently mismatched, but I can't figure out which ones. I've compared the source line by line and it appears that they do the same setup. They both use narrow band mode. They both use the same parameters for enhancer (1), variable bit rate (0), quality (3), complexity (1), and sample rate (8000). The observed length of encoded frames matches, too.
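For reference, here's roughly what the setup looks like on both sides (a sketch against the libspeex C API, using exactly the values listed above; everything else is left at its default, and the wrapper function names are just for illustration):

    #include <speex/speex.h>

    void* make_encoder() {
        void* enc = speex_encoder_init(&speex_nb_mode);   // narrow band
        int vbr = 0, quality = 3, complexity = 1, rate = 8000;
        speex_encoder_ctl(enc, SPEEX_SET_VBR, &vbr);
        speex_encoder_ctl(enc, SPEEX_SET_QUALITY, &quality);
        speex_encoder_ctl(enc, SPEEX_SET_COMPLEXITY, &complexity);
        speex_encoder_ctl(enc, SPEEX_SET_SAMPLING_RATE, &rate);
        return enc;
    }

    void* make_decoder() {
        void* dec = speex_decoder_init(&speex_nb_mode);   // narrow band
        int enhancer = 1, rate = 8000;
        speex_decoder_ctl(dec, SPEEX_SET_ENH, &enhancer);
        speex_decoder_ctl(dec, SPEEX_SET_SAMPLING_RATE, &rate);
        return dec;
    }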
In case it's any help, here's some sample audio data covering 6 frames from the beginning of a call (hopefully the parameters I mentioned are enough to decode it):
1dde5c800039ce70001ce7207b60000a39242d95
e8bda0cf21b6ec4629ad0f3b04290474110e70fb
1bdd3a9dfc211845e0ed90dabde11451e191186c
0ba5de5bea933ed1d3675f786947444781407e17
1bd5549fefa91b63d4968b299bf603d7e533b98c
6351b7953f4470d63bbb2b8c49be650ee89488b5
// at this point I get:
// notification: "More than two wideband layers found. The stream is corrupted."
I'm at a bit of a loss. I don't know what to check next.
What are other reasons that audio data transferred from one computer to another, encoded with Speex, might end up being misinterpreted? I'm especially interested in the stupid reasons.

Self-answer: Check the entire data path from end to end, with logging at each point.
The issue we were having is that the audio was being encrypted with AES in CTR mode, but the apps were using different endianness for the counter. The first 32 bytes of audio made it through intact, which made it look like an encoding issue (there was some non-noise at the start), but the rest of the data was garbled.
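To illustrate the failure mode, here's a minimal sketch (no real AES, and the 16-byte counter block layout is an assumption) showing that a big-endian and a little-endian increment agree on the very first counter block and diverge after that, so the keystreams, and therefore the decrypted audio, match briefly and then turn to garbage:

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Increment the 16-byte counter block as a big-endian integer
    // (last byte first), the way most CTR implementations do it.
    void inc_be(uint8_t ctr[16]) {
        for (int i = 15; i >= 0 && ++ctr[i] == 0; --i) {}
    }

    // Increment the same block as a little-endian integer (first byte
    // first), which is what the other app was effectively doing.
    void inc_le(uint8_t ctr[16]) {
        for (int i = 0; i < 16 && ++ctr[i] == 0; ++i) {}
    }

    int main() {
        uint8_t a[16] = {0}, b[16] = {0};   // both start from the shared IV
        for (int block = 0; block < 3; ++block) {
            printf("block %d: counters %s\n", block,
                   memcmp(a, b, 16) == 0 ? "match" : "differ");
            inc_be(a);
            inc_le(b);
        }
        // Prints: block 0 match, blocks 1 and 2 differ, so only the
        // first keystream block decrypts correctly.
    }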

Related

Control Chromecast buffering at start

Is there a way to control the amount of buffering CC devices do before they start playback?
My sender app sends real-time FLAC audio, and the CC waits 10+ seconds before starting to play. I've built a custom receiver and tried changing autoPauseDuration and autoResumeDuration, but it doesn't seem to matter. I assume those are only used when an underflow event happens, not at startup.
I realize that forcing a start at a low buffering level might end up in an underflow, but that's a "risk" that is much better than always waiting so long before playback starts. And if it does happen, the autoPause/Resume hysteresis would allow a larger re-buffer to take place then.
If you are using the Media Player Library, take a look at player.getBufferDuration. The docs cover more details about how you can customize the player behavior: https://developers.google.com/cast/docs/player#frequently-asked-questions
Finally, it turned out to be a problem with the way I was sending audio to the default receiver. I was streaming FLAC, and since it's a streamable format, I did not include any header (you can start anywhere in the stream; it's just a matter of finding the sync point). But the FLAC decoder in the CC does not like that and was taking 10+ seconds to start. As soon as I added a STREAMINFO header, the problem went away.
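For anyone hitting the same thing: the preamble can be built by hand. Here's a hypothetical sketch of a minimal "fLaC" marker plus STREAMINFO block; the field values are placeholders to adjust for your stream, and unknown frame sizes / total samples are left at zero, which the format allows:

    #include <cstdint>
    #include <vector>

    // Builds "fLaC" + a lone STREAMINFO metadata block (34 bytes).
    std::vector<uint8_t> MakeFlacHeader(uint32_t sampleRate, uint8_t channels,
                                        uint8_t bitsPerSample, uint16_t blockSize) {
        std::vector<uint8_t> h = {'f', 'L', 'a', 'C',
                                  0x80, 0x00, 0x00, 0x22};  // last block, type 0, length 34
        auto put16 = [&](uint16_t v) { h.push_back(v >> 8); h.push_back(v & 0xFF); };
        put16(blockSize); put16(blockSize);           // min/max block size
        for (int i = 0; i < 6; ++i) h.push_back(0);   // min/max frame size unknown
        // 20-bit sample rate | 3-bit channels-1 | 5-bit bps-1 | 36-bit total samples (0)
        uint64_t packed = ((uint64_t)sampleRate << 44) |
                          ((uint64_t)(channels - 1) << 41) |
                          ((uint64_t)(bitsPerSample - 1) << 36);
        for (int i = 7; i >= 0; --i) h.push_back((uint8_t)(packed >> (i * 8)));
        for (int i = 0; i < 16; ++i) h.push_back(0);  // MD5 left zeroed (unknown)
        return h;
    }

On the sender, write this once before the first audio frame; blockSize has to match what your encoder actually produces.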

IAudioClient - get notified when playback ends?

I continuously send data to IAudioClient (GetBufferSize / GetCurrentPadding / GetBuffer / ReleaseBuffer), but I want to know when the audio device finishes playing the last data I sent. I do not want to assume the player stopped just because I sent the last chunk of data to the device: it might still be playing the buffered data.
I tried using IAudioClock / IAudioClock2 to check the hardware buffer position, but it stays the same from the moment I send the last chunk.
I also don't see anything relevant in the IMMNotificationClient and IAudioSessionNotification interfaces...
What am I missing?
Thanks!
IMMNotificationClient and IAudioSessionNotification aren't going to help you; those are for detecting new devices and new application sessions, respectively. As far as I know, there's nothing in WASAPI that explicitly fires an event when the last sample is consumed by the device (exclusive mode) or the audio engine (shared mode). A trick I used in the past (albeit with DirectSound, but it should work equally well with WASAPI) is to continuously check the available space in the audio buffer (for WASAPI, using GetCurrentPadding). After you send the last sample, immediately record the current padding, say N frames. Then keep writing zeroes to the audio client until N frames have been processed (as reported by IAudioClock(2), or just guesstimate using a timer), then stop the stream. Whether this works on an exclusive-mode, event-driven stream is a driver quality issue; the driver may report the "real" playback position or just process audio in chunks of the full buffer size.
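Here's a rough sketch of that drain trick, assuming an already-running shared-mode stream; client, render and clock are the usual WASAPI interfaces, bufferFrames comes from IAudioClient::GetBufferSize, samplesPerSec from the stream format, and error handling is omitted:

    #include <windows.h>
    #include <audioclient.h>

    // Feed silence until the device clock passes the last real frame,
    // then stop the stream.
    HRESULT DrainAndStop(IAudioClient* client, IAudioRenderClient* render,
                         IAudioClock* clock, UINT32 bufferFrames,
                         UINT32 samplesPerSec) {
        UINT32 pending = 0;
        client->GetCurrentPadding(&pending);      // N real frames still queued

        UINT64 freq = 0, pos = 0;
        clock->GetFrequency(&freq);
        clock->GetPosition(&pos, NULL);
        // Device position at which frame N will have been rendered.
        UINT64 target = pos + (UINT64)pending * freq / samplesPerSec;

        for (;;) {
            clock->GetPosition(&pos, NULL);
            if (pos >= target) break;             // last real frame consumed
            UINT32 pad = 0;
            client->GetCurrentPadding(&pad);
            UINT32 avail = bufferFrames - pad;
            if (avail > 0) {                      // top the buffer up with zeroes
                BYTE* data = NULL;
                if (SUCCEEDED(render->GetBuffer(avail, &data)))
                    render->ReleaseBuffer(avail, AUDCLNT_BUFFERFLAGS_SILENT);
            }
            Sleep(5);
        }
        return client->Stop();
    }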

Decoding a picture from a gps tracker

I'm developing a server for a GPS tracker that can send pictures taken by a camera connected to it, inside a vehicle.
The problem is that I've followed every step in the manual and I still can't decode the bytes sent by the tracker into a picture:
I receive the picture in packets, each with its own header and "tail". When I receive the bytes, I convert them to hexadecimal as the manual specifies; then I remove the headers and "tails", and supposedly, after joining the remaining data and saving it as a .jpeg, the image should appear, but it doesn't.
The company's name is "Toplovo", from China. Has anyone else solved something similar?
Are the line feeds part of your actual data? Because if so, I doubt that's supposed to happen.
Otherwise, make sure you're writing the file in binary mode rather than text mode; in some languages this matters. Also make sure you're not using any data types unsuited for hexadecimal values (we don't even know what language you're using, so it's hard to give specific suggestions).
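For the general idea, here's a hypothetical sketch in C++; the packet handling and file name are made up, so substitute the framing from the Toplovo manual:

    #include <cstdint>
    #include <fstream>
    #include <string>
    #include <vector>

    // Turn a hex string ("FFD8FFE0...") back into raw bytes.
    std::vector<uint8_t> HexToBytes(const std::string& hex) {
        std::vector<uint8_t> out;
        for (size_t i = 0; i + 1 < hex.size(); i += 2)
            out.push_back((uint8_t)std::stoi(hex.substr(i, 2), nullptr, 16));
        return out;
    }

    int main() {
        // Hex payloads with headers/tails already stripped, in arrival order.
        std::vector<std::string> packets = {/* ... */};
        std::ofstream jpeg("photo.jpg", std::ios::binary);  // binary, not text mode
        for (const std::string& p : packets) {
            std::vector<uint8_t> bytes = HexToBytes(p);
            jpeg.write((const char*)bytes.data(), (std::streamsize)bytes.size());
        }
        // A valid JPEG should start with FF D8 and end with FF D9.
    }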

WASAPI: IAudioClient->Initialize succeeds even when IAudioClient->IsFormatSupported fails with same format

I am trying to find out which output formats are supported by a specific audio device in exclusive mode.
To do this, I am using IAudioClient->IsFormatSupported(), which according to the documentation should be usable for this.
Unfortunately, it returns AUDCLNT_E_UNSUPPORTED_FORMAT for almost every format I try to pass, except for default 2-channel, 44.1 kHz audio.
If I actually try to initialize the audio client, however, there are formats that succeed even though they failed IsFormatSupported().
Just trying to Initialize every format is not an option, because doing so could interrupt the audio of other applications.
Has anyone else seen this behavior or know if there is another way to find which formats are supported by a specific audio device?
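For reference, the probing boils down to roughly this sketch (it assumes an already-activated IAudioClient* for the endpoint, and only tries a few 16-bit PCM layouts):

    #include <windows.h>
    #include <audioclient.h>
    #include <cstdio>

    // Try a few exclusive-mode PCM layouts and report what
    // IsFormatSupported claims. In exclusive mode the closest-match
    // out-parameter must be NULL.
    void ProbeFormats(IAudioClient* client) {
        WAVEFORMATEX fmt = {};
        fmt.wFormatTag = WAVE_FORMAT_PCM;
        fmt.wBitsPerSample = 16;
        const WORD channelCounts[] = {2, 6, 8};
        const DWORD rates[] = {44100, 48000, 96000};
        for (WORD channels : channelCounts) {
            for (DWORD rate : rates) {
                fmt.nChannels = channels;
                fmt.nSamplesPerSec = rate;
                fmt.nBlockAlign = (WORD)(fmt.nChannels * fmt.wBitsPerSample / 8);
                fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;
                HRESULT hr = client->IsFormatSupported(
                    AUDCLNT_SHAREMODE_EXCLUSIVE, &fmt, NULL);
                printf("%d ch @ %lu Hz: %s\n", (int)channels, (unsigned long)rate,
                       hr == S_OK ? "reported supported" : "rejected");
            }
        }
    }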
I have seen this behavior as well. It seems like IsFormatSupported will only accept what is marked as 'supported' in the playback device settings in Windows, but Initialize seems to actually end up asking the drivers if it's indeed possible.
In my specific situation, I have a Xoxar HDAV1.3 set up to use HDMI as output. Two playback devices are always available: Speakers and S/PDIF Pass-through Device. If I try, for example, to request 6 channels on the S/PDIF playback device, IsFormatSupported will reject it (in theory S/PDIF only supports 2, and that's all I can see in the settings), but calling Initialize will succeed and work (it goes out over HDMI after all, for which 6 channels is supported). Talk about misleading device names!
I'm afraid there's no real practical way to work around this issue.

Flex 4 > spark.components.VideoPlayer > How to switch bit rate?

The VideoPlayer component (and possibly VideoDisplay as well) is capable of automatically picking the best-quality video from the list it's given. An example is here:
http://help.adobe.com/en_US/FlashPlatform/beta/reference/actionscript/3/spark/components/mediaClasses/DynamicStreamingVideoItem.html#includeExamplesSummary
I cannot find the answers to the questions below.
Assuming that the server streaming the recorded videos is capable of switching between versions of the same video at different bit rates, and of streaming them from any point within their timelines:
Is the bandwidth test/calculation within this component done only before the video starts playing, at which point it picks the best video source and never uses the others? Or does it run its bandwidth tests continuously or periodically and switch between video sources during playback accordingly?
Does it support setting the video source through code, and can its automatic switching between video sources be turned off (in case I want to offer this functionality to the user as a button/dropdown or similar)? I know that the preferred video source can be set, but that only means that source will be tested/attempted first.
What other media servers can be used with this component, besides the one provided by Adobe, to achieve automated and manual switching between different quality of same video?
Obviously, I'd like to create a player that is smart enough to switch automatically between different qualities of the same video, and that also supports manual instructions about which source to play - both without interrupting the playback, or at least without restarting it (minor interruptions are acceptable). The playback also needs to be able to start at any given point within the video, once enough data has been buffered (of course), but most importantly, I want to be able to start playback beyond what's buffered. A note or two about fast-forwarding wouldn't hurt either, if anyone knows anything.
Thank you for your time.
