For a project due to be performed soon I have a problem. The task is to play 4K ProRes 422 files according to a sequence written in an XML file, while listening on an OSC port for cue signals and providing some feedback to the operator. The player can also smoothly vary the speed by ±15%, has a general fader, and sends a few values back to the OSC controller to update the performer.
Playback is currently unpredictably choppy, and I don't know why.
I also tend to think it is not a hardware problem: the machine is a Mac Pro 2014 running OS X 10.9 (https://www.apple.com/mac-pro/specs/) with 64 GB of RAM, all data on SSD, and a very capable graphics card. The choppiness is rather unpredictable: random frame drops in different places. I tried driving the player with external time and it is a bit better, but still not satisfactory. I am going to package it as an app without the editor, but in preliminary tests it is not much faster.
I also wonder what the best way is to examine the code for leaks...
Playing the files in QuickTime Player uses about 20% CPU; in Quartz Composer it is over 90%.
I am stuck on this issue, having done all the obvious things I can think of, and would at least like to understand how to profile the performance of the patch to find what is wrong and where.
Suggestions are welcome, and thanks for the help!
If it's not interactive, you could try rendering it out and playing it back in QuickTime Player.
I'm trying to make a kind of sound application by repurposing an old ADSL router. At this point I have already compiled a kernel with sound support and ALSA drivers, and have also compiled the alsa-lib and PortAudio libraries.
Now I'm running some tests in C++ to see what kinds of things I can do on the system (I dream of basic DSP, but the main goal is to make something that records and plays back the recorded files). The problem is that I'm having trouble with something very basic: playing a sine wave. I'm using the paex_sine.c example from http://portaudio.com/docs/v19-doxydocs/group__examples__src.html, but when it plays there are dropouts that sound like jitter.
Of course, the first thing I did was try to tune the device settings (buffer size, bit depth, sample rate and device latency). I'm testing with two different sound cards. One is a cheap USB card (the cheapest on the market), but I also ran the tests with a Zoom H1 recorder, which works as a class-compliant USB sound card. In both cases I get the same results.
I also tested playing a .wav file with this example, https://github.com/hosackm/wavplayer/blob/master/src/wavplay.c, but with the same result. I attach the original and the jittered version.
The hardware is an old Huawei ADSL router. It uses an RT63365e at 500 MHz (surprisingly, with 4 cores) but runs at 420 MHz. It also has 32 MB of RAM.
I'm pretty sure it is not a resource issue, because the other processes/system use at most 11% of the CPU, and there are 5 MB of RAM free.
So I don't know what the source of the issue could be. It's my first time compiling the Linux kernel and modding an embedded system. Can you think of anything I could research to try to fix the problem? Maybe I'm missing some kernel configuration at the build stage. I read all the menuconfig options and didn't find anything like an audio priority setting.
Original audio: https://vocaroo.com/1i0Rhyrmyhzf
Audio with jitter: https://vocaroo.com/15l3p2wL5AhH
An image showing the jitter in the file's waveform is also attached.
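In case it helps, this is roughly the kind of stream setup I have been experimenting with (a minimal sketch, not my exact test code: the mono 440 Hz tone, 44.1 kHz rate and 2048-frame buffer are just values to try, and the callback mirrors what paex_sine.c does):

    // A minimal sketch of the stream setup I am experimenting with: explicit
    // PaStreamParameters, a generous suggestedLatency and a large buffer, to see
    // whether bigger buffers stop the dropouts. The 440 Hz mono tone, 44.1 kHz
    // rate and 2048-frame buffer are just test values, not my final settings.
    #include <portaudio.h>
    #include <cmath>
    #include <cstdio>

    static constexpr double kSampleRate = 44100.0;
    static constexpr unsigned long kFramesPerBuffer = 2048;  // also try 512/1024/4096

    struct SineState { double phase = 0.0; };

    // Render callback: fills the output buffer with a quiet 440 Hz sine wave.
    static int sineCallback(const void*, void* output, unsigned long frameCount,
                            const PaStreamCallbackTimeInfo*, PaStreamCallbackFlags,
                            void* userData)
    {
        auto* state = static_cast<SineState*>(userData);
        auto* out = static_cast<float*>(output);
        const double increment = 2.0 * M_PI * 440.0 / kSampleRate;
        for (unsigned long i = 0; i < frameCount; ++i) {
            *out++ = static_cast<float>(0.2 * std::sin(state->phase));  // mono
            state->phase += increment;
        }
        return paContinue;
    }

    int main()
    {
        Pa_Initialize();

        PaStreamParameters params{};
        params.device = Pa_GetDefaultOutputDevice();
        if (params.device == paNoDevice) { Pa_Terminate(); return 1; }
        params.channelCount = 1;
        params.sampleFormat = paFloat32;
        // Ask the host API for its "safe" high latency instead of the low default.
        params.suggestedLatency = Pa_GetDeviceInfo(params.device)->defaultHighOutputLatency;
        params.hostApiSpecificStreamInfo = nullptr;

        SineState state;
        PaStream* stream = nullptr;
        PaError err = Pa_OpenStream(&stream, nullptr, &params, kSampleRate,
                                    kFramesPerBuffer, paClipOff, sineCallback, &state);
        if (err == paNoError) {
            Pa_StartStream(stream);
            // Print the latency the host API actually granted, for comparison.
            std::printf("output latency: %f s\n", Pa_GetStreamInfo(stream)->outputLatency);
            Pa_Sleep(5000);  // play for five seconds
            Pa_StopStream(stream);
            Pa_CloseStream(stream);
        } else {
            std::printf("Pa_OpenStream failed: %s\n", Pa_GetErrorText(err));
        }
        Pa_Terminate();
        return 0;
    }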
I'm programming a video capture app and need to record from two input sources (USB cams) at the same time.
When I record just the raw footage from both simultaneously, without compression, it works quite well (low CPU load, no video lag), but when compression is turned on the CPU load is very high and the footage lags.
How can I solve this? Or how can I tune the settings so that it can be done?
Note: the raw streams are too big and thus cannot be used; otherwise I would not bother with compression at all and would just leave it as it is.
The AVFoundation framework, in its current configuration, is set up to provide HW acceleration for only one source at a time. For multiple accelerated sources one needs to go deeper, to the VideoToolbox framework, and even deeper.
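To give an idea of what that looks like in practice, here is a rough sketch of a VideoToolbox compression session with hardware encoding requested (one session per camera; the resolution arguments, the H.264 codec choice and the mostly empty output callback are placeholder assumptions, not details from your project):

    // Sketch: one VTCompressionSession per camera with the hardware encoder
    // requested. The codec choice and the output handling are placeholders.
    #include <VideoToolbox/VideoToolbox.h>
    #include <cstdio>

    // VideoToolbox calls this for every encoded frame; mux or write sampleBuffer here.
    static void encodedFrameCallback(void* /*refCon*/, void* /*frameRefCon*/,
                                     OSStatus status, VTEncodeInfoFlags /*flags*/,
                                     CMSampleBufferRef sampleBuffer)
    {
        if (status != noErr || sampleBuffer == nullptr) {
            std::fprintf(stderr, "encode failed: %d\n", static_cast<int>(status));
            return;
        }
        // ... hand the compressed sample buffer to an AVAssetWriter, file or socket ...
    }

    static VTCompressionSessionRef createHardwareSession(int32_t width, int32_t height)
    {
        // Request the hardware encoder (VideoToolbox may still fall back to software).
        const void* specKeys[] = { kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder };
        const void* specVals[] = { kCFBooleanTrue };
        CFDictionaryRef spec = CFDictionaryCreate(kCFAllocatorDefault, specKeys, specVals, 1,
                                                  &kCFTypeDictionaryKeyCallBacks,
                                                  &kCFTypeDictionaryValueCallBacks);

        VTCompressionSessionRef session = nullptr;
        OSStatus err = VTCompressionSessionCreate(kCFAllocatorDefault, width, height,
                                                  kCMVideoCodecType_H264, spec,
                                                  nullptr,   // default source pixel buffer attributes
                                                  nullptr,   // default compressed data allocator
                                                  encodedFrameCallback, nullptr, &session);
        CFRelease(spec);
        if (err != noErr) return nullptr;

        // Real-time mode keeps latency and CPU spikes down during live capture.
        VTSessionSetProperty(session, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
        VTCompressionSessionPrepareToEncodeFrames(session);
        return session;
    }

    // For every CVPixelBufferRef captured from a camera, feed its session with:
    //   VTCompressionSessionEncodeFrame(session, pixelBuffer, presentationTime,
    //                                   kCMTimeInvalid, nullptr, nullptr, nullptr);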
I am getting a latency between AVFoundation and a simple [NSSound play] that seems to depend on the computer.
My program plays one video track and 3 audio tracks arranged inside an AVPlayer. This works nicely. Independently, the program generates a metronome tick for each beat of the measure, following information from the music score. The two metronome sounds are very short files that I load into an NSSound and play with [NSSound play]. I noticed that I had to shift the metronome playback by about 90 milliseconds for it to be perfectly synchronized. Part of it may be the exact position of the metronome's impact within the metronome file, but if that were the only reason, the delay would be the same on all Mac computers. However, on different Macs this delay must be adjusted. As it is a metronome beat synchronized with the music, it is quite critical: a slight shift makes it sound off beat. Is there any way to calculate this delay directly from the AVFoundation API? Or to compensate for it, or to play the metronome in another way so that there is no delay between the AVPlayer and the NSSound playback? I would appreciate any link or idea about this.
Thanks!
Dominique
Arpege Music, Belgium
I suggest looking into a low-level audio library to manage and instantly play your music. BASS is a low-level library built on top of Audio Units which allows refined, precise and fast control over your stream. By manipulating your buffer, and possibly creating a mixer stream (refer to the docs), you should be able to play the sound instantly on any device. Specifically, look into buffering the sound beforehand and keeping it in memory, since it's a short sound.
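As a rough illustration (a minimal sketch, not drop-in code: the file name, device settings and the missing beat scheduler are placeholders):

    // Sketch: pre-load the short metronome click into memory with BASS and fire it
    // on demand. "click.wav" and the device/frequency values are placeholders.
    #include "bass.h"
    #include <cstdio>

    int main()
    {
        // Default output device at 44.1 kHz.
        if (!BASS_Init(-1, 44100, 0, nullptr, nullptr)) {
            std::fprintf(stderr, "BASS_Init failed: %d\n", BASS_ErrorGetCode());
            return 1;
        }

        // Decode the whole click into memory once, at startup (up to 8 overlapping plays).
        HSAMPLE click = BASS_SampleLoad(FALSE, "click.wav", 0, 0, 8, 0);
        if (!click) {
            std::fprintf(stderr, "BASS_SampleLoad failed: %d\n", BASS_ErrorGetCode());
            return 1;
        }

        // On every beat: grab a channel for the preloaded sample and start it immediately.
        HCHANNEL channel = BASS_SampleGetChannel(click, FALSE);
        BASS_ChannelPlay(channel, FALSE);

        // ... keep the process alive and trigger the two lines above on each beat ...

        BASS_Free();
        return 0;
    }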
I have a C++ application that receives a timestamped audio stream and attempts to play the audio samples as close as possible to the specified timestamp. To do so I need to know the delay (with reasonable accuracy) from when I place the audio samples in the output buffer until the audio is actually heard.
There are many discussions about audio output latency, but everything I have found is about minimizing latency. That is irrelevant to me; all I need is a known (at run-time) latency.
On Linux I solve this with snd_pcm_delay() with very good results, but I'm looking for a decent solution for Windows.
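To illustrate, the Linux side is essentially this (a minimal sketch, assuming an already-opened and running snd_pcm_t playback handle and its sample rate):

    // Sketch: the ALSA query I rely on. Assumes `pcm` is an already-opened, running
    // snd_pcm_t playback handle and `rate` is its configured sample rate.
    #include <alsa/asoundlib.h>

    double currentOutputDelaySeconds(snd_pcm_t* pcm, unsigned int rate)
    {
        snd_pcm_sframes_t delayFrames = 0;
        // Frames queued between what the application wrote and what is being heard.
        if (snd_pcm_delay(pcm, &delayFrames) < 0 || delayFrames < 0)
            return 0.0;
        return static_cast<double>(delayFrames) / rate;
    }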
I have looked at the following:
With OpenAL I have measured delays of around 80 ms that are unaccounted for. I assume this isn't a hardcoded value, and I haven't found any API to read the latency. There are some extensions to OpenAL that claim to support this, but from what I can tell they are only implemented on Linux.
WASAPI has GetStreamLatency(), which sounds like the real deal, but it is apparently only some thread polling interval or similar, so it's also useless; I still have 30 ms of unaccounted-for delay on my machine (see the sketch after this list).
DirectSound has no API for getting the latency, as far as I can tell. But can I get close enough by just keeping track of my output buffers?
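For the WASAPI route, the direction I am considering is sketched below (untested; it assumes a shared-mode IAudioClient that has been initialized and started, an IAudioClock obtained from it via GetService, and a framesWritten counter that my render loop would increment after every ReleaseBuffer):

    // Sketch: estimating the current render delay with WASAPI's IAudioClock.
    // Assumes a shared-mode IAudioClient that is initialized and started, an
    // IAudioClock obtained via GetService(__uuidof(IAudioClock), ...), and a
    // framesWritten counter incremented by the render loop after every
    // IAudioRenderClient::ReleaseBuffer call.
    #include <audioclient.h>
    #include <cstdint>

    double EstimateOutputDelaySeconds(IAudioClock* audioClock,
                                      uint64_t framesWritten,
                                      double sampleRate)
    {
        UINT64 clockFreq = 0;  // units per second of the device position counter
        UINT64 devicePos = 0;  // how far the device has actually played, in those units
        UINT64 qpcPos = 0;     // QPC timestamp of the reading (unused here)

        if (FAILED(audioClock->GetFrequency(&clockFreq)) ||
            FAILED(audioClock->GetPosition(&devicePos, &qpcPos)) ||
            clockFreq == 0) {
            return -1.0;  // caller falls back to GetStreamLatency() + GetCurrentPadding()
        }

        const double secondsSubmitted = static_cast<double>(framesWritten) / sampleRate;
        const double secondsPlayed    = static_cast<double>(devicePos) / static_cast<double>(clockFreq);

        // Everything submitted but not yet played is still ahead of the speakers.
        return secondsSubmitted - secondsPlayed;
    }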
Edit in response to Brad's comment:
My impression of ASIO is that it is primarily targeted at professional audio applications and audio connoisseurs, that the user might have to install special sound card drivers, and that I would have to deal with licensing. Feature-wise it seems like a good option, though.
The setup
The game in question is using CoreAudio and a single AUGraph to play sounds.
The graph looks like this:
input callbacks -> 3DMixer -> DefaultOutputDevice
3DMixer's BusCount is set to 50 sounds max.
All samples are converted to the default output device's stream format before being fed to the input callbacks. Unused callbacks aren't set (they are NULL). Most sounds are 3D, so azimuth, pan, distance and gain are usually set for each mixer input rather than left alone; the values are checked to make sure only valid ones are set. A mixer input's playback rate is also sometimes modified slightly to simulate pitch, but for most sounds it is kept at the default setting.
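To give an idea of how the per-input parameters are driven, here is a simplified sketch (the clamping ranges, the helper name and the mixerUnit/bus arguments are illustrative only, not the game's actual code):

    // Sketch: validating and applying per-input 3D mixer parameters. The clamping
    // ranges and names are illustrative, not the game's actual code.
    #include <AudioUnit/AudioUnit.h>
    #include <algorithm>
    #include <cmath>

    static OSStatus setBusParams(AudioUnit mixerUnit, UInt32 bus,
                                 float azimuthDeg, float distanceMeters, float gainDb)
    {
        // Reject NaN/Inf outright; clamp everything else into sane ranges.
        if (!std::isfinite(azimuthDeg) || !std::isfinite(distanceMeters) || !std::isfinite(gainDb))
            return kAudioUnitErr_InvalidParameter;

        azimuthDeg     = std::min(180.0f, std::max(-180.0f, azimuthDeg));
        distanceMeters = std::max(0.0f, distanceMeters);
        gainDb         = std::min(0.0f, std::max(-120.0f, gainDb));

        OSStatus err = AudioUnitSetParameter(mixerUnit, k3DMixerParam_Azimuth,
                                             kAudioUnitScope_Input, bus, azimuthDeg, 0);
        if (err != noErr) return err;
        err = AudioUnitSetParameter(mixerUnit, k3DMixerParam_Distance,
                                    kAudioUnitScope_Input, bus, distanceMeters, 0);
        if (err != noErr) return err;
        return AudioUnitSetParameter(mixerUnit, k3DMixerParam_Gain,
                                     kAudioUnitScope_Input, bus, gainDb, 0);
    }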
The problem
Let's say I run the game and start a level populated with many sounds and a lot of action.
I'm running HALLab's IO Cycle Telemetry window to see how much time it takes to process each audio cycle: it never takes more than 4 ms of the more than 10 ms available per cycle, and I can't spot a single peak that would push it over the allotted time.
At some point while playing the game, when many sounds are playing at the same time (fewer than 50, but no fewer than 20), I hear a poof, and from then on there is only silence. No sound can be produced by any application on the Mac. The IO Telemetry window shows my audio ticks still running, still taking time, still providing samples to the output device.
This state persists even when fewer sounds, and then no sounds at all, are playing in my game.
Even if I quit the game entirely, Mac sounds generated by other applications don't come back.
Putting the Mac to sleep and waking it up doesn't help either.
Only a full reboot brings the sound back. After it's back, the first few sounds have crackling in them.
What can I do to avoid the problem? The game is big and complicated, and I can't modify what's playing; but it doesn't seem to overload the IO thread, so I'm not sure I should. The problem can't be caused by any specific sound data, because all the sound samples are played many times before the problem occurs. I'd think any sound going to the physical speakers would be screened to avoid overloading them physically, and the sound doesn't have to be loud at all to cause the bug.
I'm out of ideas.
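The only defensive measure I can still think of adding (purely a guess on my part, not a confirmed cause) is screening the input callbacks for NaN or infinite samples before they reach the mixer. A sketch of the kind of guard I mean, assuming Float32 buffers:

    // Sketch: zero out any non-finite samples before a buffer leaves an input
    // callback, so one bad value cannot poison the mixer or the HAL downstream.
    // Assumes the Float32 stream format described above; meant to be called on
    // ioData at the end of each input render callback.
    #include <AudioUnit/AudioUnit.h>
    #include <cmath>

    static void SanitizeOutput(AudioBufferList* ioData)
    {
        for (UInt32 b = 0; b < ioData->mNumberBuffers; ++b) {
            float* samples = static_cast<float*>(ioData->mBuffers[b].mData);
            const UInt32 count = ioData->mBuffers[b].mDataByteSize / sizeof(float);
            for (UInt32 i = 0; i < count; ++i) {
                if (!std::isfinite(samples[i]))
                    samples[i] = 0.0f;  // replace NaN/Inf with silence
            }
        }
    }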