ffmpeg not honoring sample rate in opus output - ffmpeg

I am capturing a live audio stream to Opus, and no matter what I choose for the audio sample rate, I get 48 kHz output.
This is my command line:
./ffmpeg -f alsa -ar 16000 -i sysdefault:CARD=CODEC \
-f alsa -ar 16000 -i sysdefault:CARD=CODEC_1 \
-filter_complex join=inputs=2:channel_layout=stereo:map=0.1-FR\|1.0-FL,asetpts=expr=N/SR/TB \
-ar 16000 -ab 64k -c:a opus -vbr off -compression_level 5 output.ogg
And this is what ffmpeg responds with:
Output #0, ogg, to 'output.ogg':
Metadata:
encoder : Lavf57.48.100
Stream #0:0: Audio: opus (libopus), 16000 Hz, stereo, s16, delay 104, padding 0, 64 kb/s (default)
Metadata:
encoder : Lavc57.54.100 libopus
However, it appears that ffmpeg has lied, because when analysing the file again, I get:
Input #0, ogg, from 'output.ogg':
Duration: 00:00:03.21, start: 0.000000, bitrate: 89 kb/s
Stream #0:0: Audio: opus, 48000 Hz, stereo, s16, delay 156, padding 0
Metadata:
ENCODER : Lavc57.54.100 libopus
I have tried so many permutations of sample rate, simplifying down to a single audio input etc etc - always with the same result.
Any ideas?

This question should be asked and answered on Super User, since it's about using software instead of programming. But, since I know the answer, I'll post one anyway.
FFmpeg will encode Opus at the sample rate specified. You can verify this in the source code of libopusenc.c.
But FFmpeg will decode Opus at 48 kHz, even if it was encoded at a lower sample rate. You can verify this in libopusdec.c.
This is actually recommended by the Ogg Opus specification (IETF RFC 7845). Section 5.1, item 5 says:
An Ogg Opus player SHOULD select the playback sample rate according to the following procedure:
1. If the hardware supports 48 kHz playback, decode at 48 kHz.
2. Otherwise, if the hardware's highest available sample rate is a supported rate, decode at this sample rate.
3. Otherwise, if the hardware's highest available sample rate is less than 48 kHz, decode at the next higher Opus supported rate above the highest available hardware rate and resample.
4. Otherwise, decode at 48 kHz and resample.
Since FFmpeg and most hardware support 48 kHz playback, 48 kHz is used for decoding Opus in FFmpeg. The original sample rate is stored in the OpusHead packet of the Ogg container, so you can retrieve it using a parser or different player if you wish, but FFmpeg ignores it and just decodes at 48 kHz.
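If you do want that original rate programmatically, here is a minimal sketch (not a full Ogg parser) that finds the OpusHead packet near the start of an Ogg Opus file and reads the recorded input sample rate. The field offsets follow RFC 7845; the function name is my own.

```python
# Read the original input sample rate from the OpusHead packet of an
# Ogg Opus file. OpusHead layout (RFC 7845, section 5.1):
# magic "OpusHead" (8) + version (1) + channel count (1) + pre-skip (2)
# precede the 32-bit little-endian input sample rate.
import struct

def opus_input_sample_rate(path: str) -> int:
    with open(path, "rb") as f:
        data = f.read(4096)  # OpusHead lives in the first Ogg page
    i = data.find(b"OpusHead")
    if i < 0:
        raise ValueError("no OpusHead packet found")
    return struct.unpack_from("<I", data, i + 12)[0]
```

For the file in the question, this would return 16000 even though every player decodes it at 48 kHz.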

Related

Concatenating audio files with ffmpeg results in a wrong total duration

With "wrong total duration" I mean a total duration different from the sum of individual duration of audio files.
sum_duration_files != duration( concatenation of files )
In particular I am concatenating 2 OGG audio files with this command
ffmpeg -safe 0 -loglevel quiet \
-f concat -segment_time_metadata 1 -i {m3u_file_name} \
-vf select=concatdec_select \
-af aselect=concatdec_select,aresample=async=1 \
{ogg_file_name}
And I get the following
# Output of: ffprobe <FILE>.ogg
======== files_in
Input #0, ogg, from 'f1.ogg':
Duration: 00:00:04.32, start: 0.000000, bitrate: 28 kb/s
Stream #0:0: Audio: opus, 48000 Hz, mono, fltp
Input #0, ogg, from 'f2.ogg':
Duration: 00:00:00.70, start: 0.000000, bitrate: 68 kb/s
Stream #0:0: Audio: vorbis, 44100 Hz, mono, fltp, 160 kb/s
Metadata:
ENCODER : Lavc57.107.100 libvorbis
Note durations: 4.32 and 0.7 sec
And this is the output file.
========== files out (concatenate of files_in)
Input #0, ogg, from 'f_concat_v1.ogg':
Duration: 00:00:04.61, start: 0.000000, bitrate: 61 kb/s
Stream #0:0: Audio: vorbis, 48000 Hz, mono, fltp, 80 kb/s
Metadata:
ENCODER : Lavc57.107.100 libvorbis
Duration: 4.61 sec
As 4.61 sec != 4.32 + 0.7 sec I have a problem.
The issue here is using the wrong concatenation approach for these files. As the FFmpeg wiki article suggests, file-level concatenation (-f concat) requires all files in the listing to have the exact same codec parameters. In your case, only the number of channels (mono) and sample format (fltp) are common between them; the codec (opus vs. vorbis) and sampling rate (48000 vs. 44100) differ.
-f concat grabs the first set of parameters and runs with it. In your case, it uses 48000 samples/s for all the files. Although the second file is 44100 samples/s, it is assumed to be 48k (so it will play faster than it should). I don't know how the difference in codec played out in the output.
So, the standard approach is to use -filter_complex concat=n=2:v=0:a=1 with these files given as separate inputs.
Out of curiosity, have you listened to the wrong-duration output file? [edit: never mind, your self-answer indicates one of them is a silent track]
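As a quick sketch of checking that precondition up front, one could compare the audio parameters ffprobe reports for each input before handing them to -f concat. The ffprobe flags below are standard; the helper functions and the hard-coded example dicts (mirroring the two files in the question) are my own illustration.

```python
# Check whether two inputs share the codec parameters that -f concat
# requires. stream_params() shells out to ffprobe; the comparison
# itself is pure and testable.
import json
import subprocess

CONCAT_KEYS = ("codec_name", "sample_rate", "channels")

def stream_params(path: str) -> dict:
    """Return the first audio stream's concat-relevant parameters."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_streams", "-select_streams", "a:0", path],
        capture_output=True, text=True, check=True).stdout
    stream = json.loads(out)["streams"][0]
    return {k: stream[k] for k in CONCAT_KEYS}

def concat_compatible(a: dict, b: dict) -> bool:
    """-f concat needs identical codec parameters across all inputs."""
    return a == b

# The question's two files, as ffprobe would report them:
f1 = {"codec_name": "opus", "sample_rate": "48000", "channels": 1}
f2 = {"codec_name": "vorbis", "sample_rate": "44100", "channels": 1}
```

Here concat_compatible(f1, f2) is False, which is exactly why the file-level concat misbehaves.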
I don't know WHY it happens, but I know how to avoid the problem in my particular case.
My case:
I am mixing (concatenating) different audio files generated by one single source with silence files generated by me.
Initially I generated the silence files with
# x is a float from python
ffmpeg -f lavfi -i anullsrc=r=44100:cl=mono -t {x:2.1f} -q:a 9 -acodec libvorbis silence-{x:2.1f}.ogg
Trying to resolve the issue, I re-created those silences with the SAME parameters as the audio files I was mixing them with, that is, mono at 48 kHz:
ffmpeg -f lavfi -i anullsrc=r=48000:cl=mono -t {x:2.1f} -c:a libvorbis silence-{x:2.1f}.ogg
And now ffprobe shows the expected result.
========== files out (concatenate of files_in)
Input #0, ogg, from 'f_concat_v2.ogg':
Duration: 00:00:05.02, start: 0.000000, bitrate: 56 kb/s
Stream #0:0: Audio: vorbis, 48000 Hz, mono, fltp, 80 kb/s
Metadata:
ENCODER : Lavc57.107.100 libvorbis
Duration: 5.02 = 4.32 + 0.70
If you want to avoid problems when concatenating silence with other sounds, do create the silence with the SAME parameters as the sound you will mix it with (mono/stereo and Hz).
==== Update 2022-03-08
Using the info provided by @kesh, I have recreated the silent ogg files using
ffmpeg -f lavfi -i anullsrc=r=48000:cl=mono -t 5.8 -c:a libopus silence-5.8.ogg
And now the command
ffmpeg -safe 0 -f concat -segment_time_metadata 1 \
-i {m3u_file_name} \
-vf select=concatdec_select \
-af aselect=concatdec_select,aresample=async=1 {ogg_file_name}
no longer throws this error (which used to appear multiple times):
[opus # 0x558b2c245400] Error parsing the packet header.
Error while decoding stream #0:0: Invalid data found when processing input
I must say that the error was not actually causing me any problem, since the output was what I expected, but now I feel better without it.

Specifying input duration of .aac with ffmpeg

I had an error with an mp4 recording, and after recovering the video and audio streams I've still got an issue. The AAC audio file is 160 kb/s CBR. However, ffmpeg returns this when trying to work with it:
[aac # 000001187e6944c0] Estimating duration from bitrate, this may be inaccurate
Input #0, aac, from 'result.aac':
Duration: 00:38:41.01, bitrate: 174 kb/s
Stream #0:0: Audio: aac (LC), 44100 Hz, stereo, fltp, 174 kb/s
That duration and bitrate are totally wrong. It should be ~42 minutes long, and it definitely has a bitrate of 160 kb/s.
This results in the audio being very inconsistently timed, as well as having all sorts of other issues. It's very weird.
Is there any way I can specify that the input is 160 kb/s CBR to try and wrangle it back into a usable file?

What is the difference of duration bitrate and stream bitrate in ffmpeg/ffprobe?

Why do ffmpeg/ffprobe give different bitrate values for the stream and for the file as a whole?
When I analyze an MP3 file with ffprobe, it gives different bitrates on the first and second lines.
Does anyone know what the difference is?
// File 1, there is problem
Duration: 02:05:47.04, start: 0.000000, bitrate: 193 kb/s
Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 192 kb/s
// File 2, no problem
Duration: 02:05:51.05, start: 0.000000, bitrate: 192 kb/s
Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 192 kb/s
(I need to get correct information about files because I process these files for fingerprinting)
If you want the actual bitrate of the audio stream, you'll need to parse it.
ffmpeg -i file -c copy -map 0:a -f null -
Note down the audio stream size on the last line, e.g. audio:8624kB and the duration on the line above it, e.g. time=00:03:43.16. Divide the first by the second to get the average bitrate of the stream.
If you want the nominal bitrate, i.e. the target set for the encoder, then it's the reading for the Stream.
The format bitrate, i.e. the one next to start:, is crude: it simply divides the filesize by the duration. But this includes all streams and headers, so it's useful for a file with a single video and a single audio stream, but not for others.
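The division described above can be sketched as follows, using the example figures from this answer (audio:8624kB over time=00:03:43.16); the function names are my own.

```python
# Average stream bitrate from ffmpeg's null-mux readout:
# stream size (kB, as printed by ffmpeg) times 8, divided by duration.
def parse_time(t: str) -> float:
    """Convert HH:MM:SS.ss to seconds."""
    h, m, s = t.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

def avg_bitrate_kbps(stream_kb: float, time_str: str) -> float:
    return stream_kb * 8 / parse_time(time_str)

# roughly 309 kbit/s for the example figures above
print(round(avg_bitrate_kbps(8624, "00:03:43.16")))
```

Note that ffmpeg's size readout is actually in KiB despite the "kB" label, so this is an approximation within a couple of percent.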

How do I get audio files of a specific file size?

Is there any way to use ffmpeg to accurately break audio files into smaller files of a specific file size, or pull a specific number of samples from a file?
I'm working with a speech-to-text API that needs audio chunks in exactly 160,000 bytes, or 80,000 16-bit samples.
I have a video stream, and I have an ffmpeg command to extract audio from it:
ffmpeg -i "rtmp://MyFMSWorkspace/ingest/test/mp4:test_1000 live=1" -ar 16000 -f segment -segment_time 10 out%04d.wav
So now I have ~10 second audio chunks with a sample rate of 16 kHz. Is there any way to break this into exactly 160 kB, 5-second files using ffmpeg?
I tried this:
ffmpeg -t 00:00:05.00 -i out0000.wav outCropped.wav
But the output was this:
Input #0, wav, from 'out0000.wav':
Metadata:
encoder : Lavf56.40.101
Duration: 00:00:10.00, bitrate: 256 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 16000 Hz, 1 channels, s16, 256 kb/s
Output #0, wav, to 'outCropped.wav':
Metadata:
ISFT : Lavf56.40.101
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 16000 Hz, mono, s16, 256 kb/s
Metadata:
encoder : Lavc56.60.100 pcm_s16le
Stream mapping:
Stream #0:0 -> #0:0 (pcm_s16le (native) -> pcm_s16le (native))
Press [q] to stop, [?] for help
size= 156kB time=00:00:05.00 bitrate= 256.1kbits/s
But now the size is 156 kB.
EDIT:
My finished command is:
ffmpeg -i "url" -map 0:1 -af aresample=16000,asetnsamples=16000 -f segment -segment_time 5 -segment_format sw out%04d.sw
That output looks perfectly right. That ffmpeg size is expressed in KiB, although it says kB. 160,000 bytes = 156.25 KiB, plus some header data; ffmpeg hides the fractional part of the size. If you want a raw file with no headers, output to .raw instead of .wav.
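The arithmetic behind this answer can be checked directly; every number below comes from the question (16 kHz, mono, 16-bit samples, 5-second chunks).

```python
# 5 s of 16 kHz mono 16-bit PCM: the raw payload size the question asks for.
SAMPLE_RATE = 16_000      # Hz
DURATION_S = 5
BYTES_PER_SAMPLE = 2      # s16 = 16-bit
CHANNELS = 1              # mono

raw_bytes = SAMPLE_RATE * DURATION_S * BYTES_PER_SAMPLE * CHANNELS
print(raw_bytes)          # 160000 bytes, i.e. 80,000 16-bit samples
print(raw_bytes // 1024)  # 156 -> matches ffmpeg's truncated "size= 156kB"
```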
For people converting video files to MP3s split into 30 minute segments:
ffmpeg -i "something.MP4" -q:a 0 -map a -f segment -segment_time 1800 FileNumber%04d.mp3
The -q option can only be used with libmp3lame and corresponds to the LAME -V option.

Extract audio from Audio wrapped into video stream ffmpeg/ffmbc

I have a mov file :
Metadata:
timecode: 09:59:50:00
Duration: 00:00:30.00, bitrate: 117714 kb/s
Stream #0.0(eng): Video: dvvideo, yuv422p, 1440x1080i tff [PAR 4:3 DAR 16:9], 115200 kb/s, 25.00 fps
Metadata:
codec_name: DVCPRO HD 1080i50
Stream #0.1(eng): Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s
Stream #0.2(eng): Data: unknown (tmcd)
I can see from MediaInfo that the audio is muxed into the video. I'm trying to re-wrap this into an XDCAM and copy over the audio streams. The problem is that I don't know how to map the audio that is wrapped into the video.
This is the command I have so far:
ffmbc -threads 8 -i "input.mov" -threads 8 -tff
-pix_fmt yuv422p -vcodec mpeg2video -timecode 09:59:50:00
.. other tags omitted ..
-acodec pcm_s24le
-map_audio_channel 0.1:0-0.1:0
-map_audio_channel 0.1:1-0.1:1
-f mov -y "output.mov"
-acodec pcm_s24le
-map_audio_channel 0.2:0-0.2:0
-map_audio_channel 0.2:1-0.2:1 -newaudio
When executed, this returns "Cannot find audio channel 0.2.0". I changed the input stream identifiers to streams 0 and 1 for the audio, which when executed returned "Cannot find audio channel #0.0.0", presumably because it's trying to find an audio channel within the video stream?
How can I extract the audio from this file?
You may notice I'm using FFmbc, not FFmpeg (there is no tag for FFmbc), but I imagine it's the same for both. I'm not constrained to FFmbc; I can move to FFmpeg if it has a solution.
Thanks
