How to extract EPG data from a rec/ts file? - ffmpeg

I need to extract the data stream of a rec/ts file.
What I've tried so far is avconv:
avconv -i filename.rec
I get this output
avconv version 0.8.17-6:0.8.17-1, Copyright (c) 2000-2014 the Libav developers
built on Mar 15 2015 17:00:31 with gcc 4.7.2
...
Input #0, mpegts, from 'filename.rec':
Duration: 01:54:55.94, start: 74083.801633, bitrate: 400 kb/s
...
Program 28479
Metadata:
...
Stream #0.0[0x475](ger): Audio: mp2, 48000 Hz, stereo, s16, 320 kb/s
Stream #0.1[0x81a]: Data: [5][0][0][0] / 0x0005
Stream #0.2[0x881]: Data: [11][0][0][0] / 0x000B
...
AFAIK the data stream contains the EPG information. Does it?
The following command
avconv -i filename.rec -f ffmetadata metadata.txt
outputs this to metadata.txt
;FFMETADATA1
and with
avconv -i filename.rec -map 0:1 -f ffmetadata metadata.txt
I get the message "Data stream encoding not supported yet (only streamcopy)".
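Since that message says only streamcopy is supported, a possible sketch (assuming a reasonably recent ffmpeg; the output file names are hypothetical) is to dump the raw data streams with -c copy and inspect whether they contain EIT/EPG tables:

```shell
# Streamcopy each data stream to a raw file for inspection; the stream
# indexes (0:1, 0:2) and PIDs come from the probe output above.
if command -v ffmpeg >/dev/null 2>&1 && [ -f filename.rec ]; then
  ffmpeg -i filename.rec -map 0:1 -c copy -f data data_0x81a.bin
  ffmpeg -i filename.rec -map 0:2 -c copy -f data data_0x881.bin
fi
# The PIDs in the probe output are hex; in decimal:
pid1=$((0x81a)); pid2=$((0x881))
echo "$pid1 $pid2"   # 2074 2177
```

This only dumps the raw packets; avconv/ffmpeg will not parse the tables for you, so you would still need a DVB-SI-aware tool to decode any EIT data found inside.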
The file filename.rec has the following content, which I would like to extract:

Related

Concatenating audio files with ffmpeg results in a wrong total duration

With "wrong total duration" I mean a total duration different from the sum of individual duration of audio files.
sum_duration_files != duration( concatenation of files )
In particular I am concatenating 2 OGG audio files with this command
ffmpeg -safe 0 -loglevel quiet \
-f concat -segment_time_metadata 1 -i {m3u_file_name} \
-vf select=concatdec_select \
-af aselect=concatdec_select,aresample=async=1 \
{ogg_file_name}
And I get the following
# Output of: ffprobe <FILE>.ogg
======== files_in
Input #0, ogg, from 'f1.ogg':
Duration: 00:00:04.32, start: 0.000000, bitrate: 28 kb/s
Stream #0:0: Audio: opus, 48000 Hz, mono, fltp
Input #0, ogg, from 'f2.ogg':
Duration: 00:00:00.70, start: 0.000000, bitrate: 68 kb/s
Stream #0:0: Audio: vorbis, 44100 Hz, mono, fltp, 160 kb/s
Metadata:
ENCODER : Lavc57.107.100 libvorbis
Note durations: 4.32 and 0.7 sec
And this is the output file.
========== files out (concatenate of files_in)
Input #0, ogg, from 'f_concat_v1.ogg':
Duration: 00:00:04.61, start: 0.000000, bitrate: 61 kb/s
Stream #0:0: Audio: vorbis, 48000 Hz, mono, fltp, 80 kb/s
Metadata:
ENCODER : Lavc57.107.100 libvorbis
Duration: 4.61 sec
As 4.61 sec != 4.32 + 0.7 sec I have a problem.
The issue here is using the wrong concatenation approach for these files. As the FFmpeg wiki article suggests, file-level concatenation (-f concat) requires all files in the listing to have exactly the same codec parameters. In your case, only the number of channels (mono) and sample format (fltp) are common between them; the codec (opus vs. vorbis) and sampling rate (48000 vs. 44100) differ.
-f concat grabs the first set of parameters and runs with it. In your case, it uses 48000 samples/s for all the files. Although the second file is 44100 samples/s, it assumes 48k (so it plays it faster than it really is). I don't know how the difference in codec played out in the output.
So, a standard approach is to use -filter_complex concat=n=2:v=0:a=1 with these files given as separate inputs.
Out of curiosity, have you listened to the wrong-duration output file? [edit: never mind, your self-answer indicates one of them is a silent track]
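A sketch of that approach for the two files above (the file names are the ones from the question; the concat filter decodes and re-encodes, so an output codec must be chosen):

```shell
# Join the inputs with the concat *filter*, which decodes and re-encodes
# rather than splicing packets; v=0:a=1 because these are audio-only inputs.
if command -v ffmpeg >/dev/null 2>&1 && [ -f f1.ogg ] && [ -f f2.ogg ]; then
  ffmpeg -i f1.ogg -i f2.ogg \
    -filter_complex "[0:a][1:a]concat=n=2:v=0:a=1[out]" \
    -map "[out]" -c:a libvorbis f_concat.ogg
fi
# Expected duration of the result: the simple sum of the input durations.
expected=$(awk 'BEGIN { printf "%.2f", 4.32 + 0.70 }')
echo "$expected"   # 5.02
```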
I don't know WHY it happens, but I know how to avoid the problem in my particular case.
My case:
I am mixing (concatenating) different audio files generated by one single source with silence files generated by me.
Initially I generated the silence files with
# x is a float from python
ffmpeg -f lavfi -i anullsrc=r=44100:cl=mono -t {x:2.1f} -q:a 9 -acodec libvorbis silence-{x:2.1f}.ogg
Trying to resolve the issue, I re-created those silences with the SAME parameters as the audio I was mixing with, that is, mono at 48 kHz:
ffmpeg -f lavfi -i anullsrc=r=48000:cl=mono -t {x:2.1f} -c:a libvorbis silence-{x:2.1f}.ogg
And now ffprobe shows the expected result.
========== files out (concatenate of files_in)
Input #0, ogg, from 'f_concat_v2.ogg':
Duration: 00:00:05.02, start: 0.000000, bitrate: 56 kb/s
Stream #0:0: Audio: vorbis, 48000 Hz, mono, fltp, 80 kb/s
Metadata:
ENCODER : Lavc57.107.100 libvorbis
Duration: 5.02 = 4.32 + 0.70
If you want to avoid problems when concatenating silence with other sounds, do create the silence with the SAME parameters as the sound you will mix it with (mono/stereo and Hz).
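One way to guarantee matching parameters is to read them from a reference file with ffprobe before generating the silence (a minimal sketch; f1.ogg and the silence file name are placeholders, and cl=mono is hard-coded for the mono case):

```shell
# anullsrc defaults, for reference: r=44100, cl=stereo — which is why
# relying on the defaults produced a mismatched silence file above.
default_rate=44100
# Query the reference file's sample rate, then generate silence with the
# same rate; pick the channel layout that matches the reference.
if command -v ffprobe >/dev/null 2>&1 && [ -f f1.ogg ]; then
  rate=$(ffprobe -v error -select_streams a:0 \
         -show_entries stream=sample_rate -of csv=p=0 f1.ogg)
  ffmpeg -f lavfi -i "anullsrc=r=${rate}:cl=mono" -t 2.0 -c:a libopus silence-2.0.ogg
fi
```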
==== Update 2022-03-08
Using the info provided by @kesh I have recreated the silent ogg files using
ffmpeg -f lavfi -i anullsrc=r=48000:cl=mono -t 5.8 -c:a libopus silence-5.8.ogg
And now the command
ffmpeg -safe 0 -f concat -segment_time_metadata 1
-i {m3u_file_name}
-vf select=concatdec_select
-af aselect=concatdec_select,aresample=async=1 {ogg_file_name}
no longer throws this error (which used to appear multiple times):
[opus @ 0x558b2c245400] Error parsing the packet header.
Error while decoding stream #0:0: Invalid data found when processing input
I must say the error was not actually causing me any problem, because the output was what I expected, but now I feel better without it.

ffmpeg how add header info into pcm?

I use this command to convert s16le to pcm_u8, but it loses the header info.
ffmpeg -i s16le.wav -f u8 pcmu8.wav
ffmpeg -i pcmu8.wav
# pcmu8.wav: Invalid data found when processing input
I want to know: how do I add this header info into pcmu8.wav?
It should look like this:
ffmpeg -i pcmu8.wav
#Input #0, wav, from 'pcmu8.wav':
# Duration: 00:13:39.20, bitrate: 64 kb/s
# Stream #0:0: Audio: pcm_u8 ([1][0][0][0] / 0x0001), 8000 Hz, mono, u8, 64 kb/s
Your first command outputs a raw bitstream, not a WAV, so adding a header afterwards won't help. Instead, use:
ffmpeg -i s16le.wav -c:a pcm_u8 pcmu8.wav
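To double-check the result, ffprobe should now report a WAV container with a pcm_u8 stream, and the bitrate line follows directly from the parameters:

```shell
# u8 mono at 8000 Hz is 1 byte/sample: 8000 bytes/s = 64000 bits/s,
# which ffprobe prints as "64 kb/s".
bits_per_sec=$((8000 * 1 * 8))
echo "$bits_per_sec"   # 64000
if command -v ffprobe >/dev/null 2>&1 && [ -f pcmu8.wav ]; then
  ffprobe pcmu8.wav
fi
```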

ffmpeg doesn't seem to be working with multiple audio streams correctly

I'm having an issue with ffmpeg 3.2.2; ordinarily I ask it to make an MP4 video file with 2 audio streams. The command line looks like this:
ffmpeg.exe
-rtbufsize 256M
-f dshow -i video="screen-capture-recorder" -thread_queue_size 512
-f dshow -i audio="Line 2 (Virtual Audio Cable)"
-f dshow -i audio="Line 3 (Virtual Audio Cable)"
-map 0:v -map 1:a -map 2:a
-af silencedetect=n=-50dB:d=60 -pix_fmt yuv420p -y "c:\temp\2channelvideo.mp4"
I've wrapped it for legibility. This once worked fine, but something is wrong lately: it doesn't seem to record any audio, even though I can use other tools like Audacity to record audio from these devices just fine.
I'm trying to do some diag on it by dropping the video component and asking ffmpeg to record the two audio devices to two separate files:
ffmpeg.exe
-f dshow -i audio="Line 2 (Virtual Audio Cable)" "c:\temp\line2.mp3"
-f dshow -i audio="Line 3 (Virtual Audio Cable)" "c:\temp\line3.mp3"
ffmpeg's console output looks like:
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, dshow, from 'audio=Line 2 (Virtual Audio Cable)':
Duration: N/A, start: 5935.810000, bitrate: 1411 kb/s
Stream #0:0: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s
Guessed Channel Layout for Input Stream #1.0 : stereo
Input #1, dshow, from 'audio=Line 3 (Virtual Audio Cable)':
Duration: N/A, start: 5936.329000, bitrate: 1411 kb/s
Stream #1:0: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s
Output #0, mp3, to 'c:\temp\line2.mp3':
Metadata:
TSSE : Lavf57.56.100
Stream #0:0: Audio: mp3 (libmp3lame), 44100 Hz, stereo, s16p
Metadata:
encoder : Lavc57.64.101 libmp3lame
Output #1, mp3, to 'c:\temp\line3.mp3':
Metadata:
TSSE : Lavf57.56.100
Stream #1:0: Audio: mp3 (libmp3lame), 44100 Hz, stereo, s16p
Metadata:
encoder : Lavc57.64.101 libmp3lame
Stream mapping:
Stream #0:0 -> #0:0 (pcm_s16le (native) -> mp3 (libmp3lame))
Stream #0:0 -> #1:0 (pcm_s16le (native) -> mp3 (libmp3lame))
Press [q] to stop, [?] for help
The problem I'm currently having is that the produced MP3s are identical copies of line 2 only; line 3 audio is not recorded. The last lines are of concern; they seem to say that stream 0 is being mapped to both output 0 and 1. Do I need a map command for each file as well? I thought it would be implicit due to the way I specified the arguments.
It turned out I needed to add a -map x:a between each source and output file, where x was either 0 or 1 depending on whether it was the first or second source.
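A sketch of the corrected diagnostic command (dshow inputs exist only on Windows; device names as in the question):

```shell
# The fix: one -map per output file, so each output explicitly selects
# which input's audio it receives. Without these, ffmpeg picks a default
# stream for every output, which is why both MP3s contained line 2.
maps="-map 0:a -map 1:a"   # output 0 <- input 0 audio, output 1 <- input 1 audio
case "$(uname -s 2>/dev/null)" in
  MINGW*|MSYS*|CYGWIN*)
    ffmpeg \
      -f dshow -i audio="Line 2 (Virtual Audio Cable)" \
      -f dshow -i audio="Line 3 (Virtual Audio Cable)" \
      -map 0:a "c:\temp\line2.mp3" \
      -map 1:a "c:\temp\line3.mp3"
    ;;
esac
echo "$maps"
```

With the explicit maps, the "Stream mapping" section should show Stream #0:0 -> #0:0 and Stream #1:0 -> #1:0 instead of stream 0 feeding both outputs.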

Using ffmpeg native aac codec, but metadata says libvo_aacenc, and faststart not supported?

I'm using ffmpeg and am trying to switch from using the 'libvo_aacenc' encoder to the native aac encoder. It seems to work, but the metadata in the output seems to indicate that it's still using the old encoder.
I changed the audio portion of my ffmpeg call from
-i out.wav -acodec libvo_aacenc
to
-i out.wav -acodec aac -strict experimental
But the output includes this:
Metadata:
encoder : Lavf53.21.1
Stream #0.0: Video: libx264, yuv420p, 432x256, q=-1--1, 30 tbn, 30 tbc
Stream #0.1: Audio: libvo_aacenc, 44100 Hz, 1 channels, s16, 200 kb/s
I don't understand where it is still getting 'libvo_aacenc' from.
Another problem, maybe unrelated, is that when I try to add the "-movflags +faststart" option to my call, I get errors:
[mp4 muxer @ 0x49ad520] [Eval @ 0x3e59d37c6b0] Undefined constant or missing '(' in 'faststart'
[mp4 muxer @ 0x49ad520] Unable to parse option value "faststart"
[mp4 muxer @ 0x49ad520] Error setting option movflags to value +faststart.
From looking online it would appear my ffmpeg version is old, pre-faststart, but my version string is: 0.8.17-4:0.8.17-0ubuntu0.12.04.1, Copyright (c) 2000-2014 the Libav developers, built on Mar 16 2015 13:26:50 with gcc 4.6.3.
That seems like it should include faststart, which was introduced in 2013, right?
Any ideas what could be going on?
Thanks very much,
Bob

How do I get audio files of a specific file size?

Is there any way to use ffmpeg to accurately break audio files into smaller files of a specific file size, or pull a specific number of samples from a file?
I'm working with a speech-to-text API that needs audio chunks in exactly 160,000 bytes, or 80,000 16-bit samples.
I have a video stream, and I have an ffmpeg command to extract audio from it:
ffmpeg -i "rtmp://MyFMSWorkspace/ingest/test/mp4:test_1000 live=1" -ar 16000 -f segment -segment_time 10 out%04d.wav
So now I have ~10 second audio chunks with a sample rate of 16 kHz. Is there any way to break this into exactly 160,000-byte, 5-second files using ffmpeg?
I tried this:
ffmpeg -t 00:00:05.00 -i out0000.wav outCropped.wav
But the output was this:
Input #0, wav, from 'out0000.wav':
Metadata:
encoder : Lavf56.40.101
Duration: 00:00:10.00, bitrate: 256 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 16000 Hz, 1 channels, s16, 256 kb/s
Output #0, wav, to 'outCropped.wav':
Metadata:
ISFT : Lavf56.40.101
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 16000 Hz, mono, s16, 256 kb/s
Metadata:
encoder : Lavc56.60.100 pcm_s16le
Stream mapping:
Stream #0:0 -> #0:0 (pcm_s16le (native) -> pcm_s16le (native))
Press [q] to stop, [?] for help
size= 156kB time=00:00:05.00 bitrate= 256.1kbits/s
but now the size is 156 kB
EDIT:
My finished command is:
ffmpeg -i "url" -map 0:1 -af aresample=16000,asetnsamples=16000 -f segment -segment_time 5 -segment_format sw out%04d.sw
That output looks perfectly right. That ffmpeg size is expressed in KiB although it says kB: 160000 bytes = 156.25 KiB, plus some header data. ffmpeg shows the size with the fractional part hidden. If you want a raw file, with no headers, output to .raw instead of .wav.
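The numbers line up like this (16-bit mono at 16 kHz, values from the question):

```shell
# 16000 samples/s * 2 bytes/sample * 1 channel = 32000 bytes/s (256 kb/s)
bytes_per_sec=$((16000 * 2 * 1))
chunk_bytes=$((bytes_per_sec * 5))               # 160000 bytes per 5 s chunk
kib=$(awk "BEGIN { printf \"%.2f\", $chunk_bytes / 1024 }")
echo "$chunk_bytes bytes = $kib KiB"             # 160000 bytes = 156.25 KiB
```

So a 5-second raw chunk is exactly 160,000 bytes; the WAV version adds a 44-byte-ish header on top, and ffmpeg's progress line rounds 156.25 KiB down to "156kB".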
For people converting video files to MP3s split into 30 minute segments:
ffmpeg -i "something.MP4" -q:a 0 -map a -f segment -segment_time 1800 FileNumber%04d.mp3
The -q option can only be used with libmp3lame and corresponds to the LAME -V option.
