I'm working with multichannel audio files (higher-order ambisonics) that typically have at least 16 channels.
Sometimes I'm only interested in a subset of the audio channels (e.g. the first 25 channels of a file that contains even more channels).
For this I have a script like the following, which takes a multichannel input file, an output file, and the number of channels I want to extract:
#!/bin/sh
infile=$1
outfile=$2
channels=$3
channelmap=$(seq -s"|" 0 $((channels-1)))
ffmpeg -y -hide_banner \
-i "${infile}" \
-filter_complex "[0:a]channelmap=${channelmap}" \
-c:a libopus -mapping_family 255 -b:a 160k -sample_fmt s16 -vn -f webm -dash 1 \
"${outfile}"
The actual channel extraction is done via the channelmap filter, which is invoked with something like -filter_complex "[0:a]channelmap=0|1|2|3"
This works great with 1, 2, 4, or 16 channels.
However, it fails with 9, 17, and 25 channels (and generally with any channel count for which ffmpeg has no predefined layout).
The error I get is:
$ ffmpeg -y -hide_banner -i input.wav -filter_complex "[0:a]channelmap=0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16" -c:a libopus -mapping_family 255 -b:a 160k -sample_fmt s16 -vn -f webm -dash 1 output.webm
Input #0, wav, from 'input.wav':
Duration: 00:00:09.99, bitrate: 17649 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, 25 channels, s16, 17640 kb/s
[Parsed_channelmap_0 @ 0x5568874ffbc0] Output channel layout is not set and cannot be guessed from the maps.
[AVFilterGraph @ 0x5568874fff40] Error initializing filter 'channelmap' with args '0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16'
Error initializing complex filters.
Invalid argument
So ffmpeg cannot guess the channel layout for a 17-channel file.
ffmpeg -layouts only lists channel layouts with 1, 2, 3, 4, 5, 6, 7, 8, and 16 channels.
However, I really don't care about the channel layout. The entire concept of a "channel layout" is centered around the idea that each audio channel should go to a different speaker.
But my audio channels are not speaker feeds at all.
So I tried providing explicit channel layouts, with something like -filter_complex "[0:a]channelmap=map=0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16:channel_layout=unknown", but this results in an error when parsing the channel layout:
$ ffmpeg -y -hide_banner -i input.wav -filter_complex "[0:a]channelmap=map=0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16:channel_layout=unknown" -c:a libopus -mapping_family 255 -b:a 160k -sample_fmt s16 -vn -f webm -dash 1 output.webm
Input #0, wav, from 'input.wav':
Duration: 00:00:09.99, bitrate: 17649 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, 25 channels, s16, 17640 kb/s
[Parsed_channelmap_0 @ 0x55b60492bf80] Error parsing channel layout: 'unknown'.
[AVFilterGraph @ 0x55b604916d00] Error initializing filter 'channelmap' with args 'map=0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16:channel_layout=unknown'
Error initializing complex filters.
Invalid argument
I also tried values like any, all, none, 0x0 and 0xFF with the same result.
I tried using mono (as the channels are kind of independent), but ffmpeg is trying to be clever and tells me that a mono layout must not have 17 channels.
I know that ffmpeg can handle multi-channel files without a layout.
E.g. converting a 25-channel file without the -filter_complex "..." works without problems, and ffprobe gives me an unknown channel layout.
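That layout-less conversion looks roughly like this (same encoder options as in the script above, just without the filtergraph; file names are placeholders):
ffmpeg -y -hide_banner -i input.wav -c:a libopus -mapping_family 255 -b:a 160k -sample_fmt s16 -vn -f webm -dash 1 output.webm
ffprobe -hide_banner output.webm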
So: how do I tell ffmpeg to just not care about the channel_layout when creating an output file that only contains a subset of the input channels?
Based on Audio Channel Manipulation you could try splitting the input into n separate mono streams and then merging them back together with amerge:
-filter_complex "\
[0:a]pan=mono|c0=c0[a0];\
[0:a]pan=mono|c0=c1[a1];\
[0:a]pan=mono|c0=c2[a2];\
[0:a]pan=mono|c0=c3[a3];\
[0:a]pan=mono|c0=c4[a4];\
[0:a]pan=mono|c0=c5[a5];\
[0:a]pan=mono|c0=c6[a6];\
[0:a]pan=mono|c0=c7[a7];\
[0:a]pan=mono|c0=c8[a8];\
[0:a]pan=mono|c0=c9[a9];\
[0:a]pan=mono|c0=c10[a10];\
[0:a]pan=mono|c0=c11[a11];\
[0:a]pan=mono|c0=c12[a12];\
[0:a]pan=mono|c0=c13[a13];\
[0:a]pan=mono|c0=c14[a14];\
[0:a]pan=mono|c0=c15[a15];\
[0:a]pan=mono|c0=c16[a16];\
[a0][a1][a2][a3][a4][a5][a6][a7][a8][a9][a10][a11][a12][a13][a14][a15][a16]amerge=inputs=17"
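If the channel count varies, the filtergraph above can be generated in the shell, in the spirit of the script from the question. A minimal sketch (untested; assumes POSIX sh and the same arguments as the original script):
#!/bin/sh
infile=$1
outfile=$2
channels=$3
# build "[0:a]pan=mono|c0=cN[aN];" for every requested channel,
# then merge all the mono streams back together with amerge
filter=""
labels=""
i=0
while [ "$i" -lt "$channels" ]; do
    filter="${filter}[0:a]pan=mono|c0=c${i}[a${i}];"
    labels="${labels}[a${i}]"
    i=$((i+1))
done
filter="${filter}${labels}amerge=inputs=${channels}"
ffmpeg -y -hide_banner \
-i "${infile}" \
-filter_complex "${filter}" \
-c:a libopus -mapping_family 255 -b:a 160k -sample_fmt s16 -vn -f webm -dash 1 \
"${outfile}"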
Building on the answer from @aergistal, and working with an MXF file with 10 audio streams, I had to modify the filter in order to specify the input for every pan filter. With "pan=mono" each filter only uses one channel, identified as c0.
-filter_complex "\
[0:a:0]pan=mono|c0=c0[a0];\
[0:a:1]pan=mono|c0=c0[a1];\
[0:a:2]pan=mono|c0=c0[a2];\
[0:a:3]pan=mono|c0=c0[a3];\
[0:a:4]pan=mono|c0=c0[a4];\
[0:a:5]pan=mono|c0=c0[a5];\
[0:a:6]pan=mono|c0=c0[a6];\
[0:a:7]pan=mono|c0=c0[a7];\
[0:a:8]pan=mono|c0=c0[a8];\
[0:a:9]pan=mono|c0=c0[a9];\
[a0][a1][a2][a3][a4][a5][a6][a7][a8][a9]amerge=inputs=10"
Related
I want to convert raw audio (binary) to an audio file (mp3, wav, etc.) with the same audio info as the original's.
Here's a video (mp4) file that has an audio stream, and the following is the audio stream info pulled out by ffmpeg:
Stream #0:1(eng): Audio: adpcm_ima_wav (ms[0][17] / 0x1100736D), 32000 Hz, 2 channels, s16p, 256 kb/s (default)
I used,
ffmpeg.exe -f s16le -ar 32000 -ac 1 -i raw_audio.raw -acodec copy output.wav
The converting process seems to finish okay, but the problem is, if I listen to output.wav, there's a lot of noise in it. Also, it's not the same audio as in the original video.
I tried specifying the "adpcm_ima_wav" codec with the "-f" switch, but it doesn't work.
Any suggestion please?
By the way, I know how to extract audio from video with ffmpeg; I just want to convert RAW audio binary data to .WAV or .MP3.
(ffmpeg.exe -i test.mp4 -map 0:a:0 audio.mp3)
If I have a video file with 1 video stream, 2 DTS audio streams, and 2 subtitle streams, can I convert a DTS stream to AC3 and mux it into a file with a single command?
Currently I use a command like this (stream 0:1 is DTS-HD) to extract the audio and convert it to AC3, then I have to manually mux it back in using -map. Is there a way to cut out that second command and just convert and mux the new stream into a new file?
ffmpeg -y -i "media.mkv" -map 0:1 -c:a ac3 -b:a 640k newmedia.mkv
ALSO: The DTS streams are 5.1 surround sound. Do I have to do anything special to preserve those channels, or will they automatically convert over?
Use
ffmpeg -y -i "media.mkv" -map 0 -c copy -c:a:0 ac3 -b:a:0 640k newmedia.mkv.
In the command above, the first output audio stream is encoded to AC3, with a bitrate set for it. All other streams are copied.
If the encoder supports the channel count and layout then they will be preserved. AC3 does, IIRC.
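Not part of the answer above, but if the goal is to keep the original DTS track and add the AC3 version as an extra audio stream rather than replace it, a variant along these lines should work. It assumes the stream order described in the question (one video, two DTS audio, two subtitle streams), so the duplicated track becomes the third output audio stream (a:2):
ffmpeg -y -i "media.mkv" -map 0 -map 0:1 -c copy -c:a:2 ac3 -b:a:2 640k newmedia.mkv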
Not sure if it's a gap or just misaligned audio samples, but when I split an audio file in two, like this:
ffmpeg -ss 0 -t 00:00:15.00 -i song.mp3 seg1.mp3
and
ffmpeg -ss 00:00:15.00 -t 15 -i song.mp3 seg2.mp3
and then combine them again with concat filter:
ffmpeg -i 'concat:seg1.mp3|seg2.mp3' out.mp3
There is a distinct "pop" between the segments. How can I make this seamless?
I see this on seg2.mp3:
Duration: 00:00:15.05, start: 0.025057, bitrate: 128 kb/s
Stream #0:0: Audio: mp3, 44100 Hz, stereo, s16p, 128 kb/s
Why is "start" not 0? That could be the gap.
If you want to eliminate the gap, I recommend using the atrim and concat filters:
ffmpeg -i input -filter_complex \
"[0:a]atrim=end=15[a0]; \
[0:a]atrim=start=15:end=30[a1]; \
[a0][a1]concat=n=2:v=0:a=1" \
output.mp3
Note that MP3 files may have silence/delay at the beginning and end, so using individually encoded segments is not ideal.
I am trying to combine two video files (each 320x240) into a single, horizontally extended output video file (of size 640x240), but the command fails at the audio merging step when one of the input files does not contain an audio stream.
Here's the command I am using:
C:\ffmpeg\bin\ffmpeg.exe -y -i "input1.flv" -i "input2.flv" -filter_complex "nullsrc=size=640x240[base];[0:v]scale=320x240[upperleft];[1:v]scale=320x240[upperright];[base][upperleft]overlay=shortest=1[tmp1];[tmp1][upperright]overlay=shortest=1:x=320:y=0;[0:a][1:a]amerge=inputs=2[aout]" -map [aout] -ac 2 "output.mp4"
This command works fine when both input1.flv and input2.flv contain audio tracks. When either one lacks an audio track, the command gives the following error:
[flv @ 0000000004300660] Stream discovered after head already parsed
[flv @ 0000000004300660] Could not find codec parameters for stream 1 (Audio: none, 0 channels): unspecified sample format
Consider increasing the value for the 'analyzeduration' and 'probesize' options
Input #1, flv, from 'input2.flv':
Metadata:
creationdate : Tue Jan 26 16:50:12
Duration: 00:25:59.10, start: 0.000000, bitrate: 212 kb/s
Stream #1:0: Video: flv1, yuv420p, 320x240, 1k tbr, 1k tbn, 1k tbc
Stream #1:1: Audio: none, 0 channels
Stream #1:2: Data: none
[abuffer @ 0000000004335620] Value inf for parameter 'time_base' out of range [0 - 2.14748e+009]
[abuffer @ 0000000004335620] Unable to parse option value "(null)" as sample format
[abuffer @ 0000000004335620] Value inf for parameter 'time_base' out of range [0 - 2.14748e+009]
[abuffer @ 0000000004335620] Error setting option time_base to value 1/0.
[graph 0 input from stream 1:1 @ 00000000042e4d60] Error applying options to the filter.
Error configuring filters.
Is there a way to make this command work even when one of the inputs lacks an audio track, or when both of them do?
There's one preparatory command that needs to be executed only once, to generate a silent audio file:
ffmpeg -f lavfi -t 1 -i anullsrc=r=48000 silence.mkv
For each flv,
ffmpeg -i input1.flv -analyzeduration 10M -i silence.mkv -c copy -map 0 -map 1 input1a.mkv
ffmpeg -i input2.flv -analyzeduration 10M -i silence.mkv -c copy -map 0 -map 1 input2a.mkv
And then,
ffmpeg -i input1a.mkv -i input2a.mkv -filter_complex "[0:v][1:v]hstack=shortest=1[v];[0:a][1:a]amerge[a]" -map [v] -map [a] -ac 2 "output.mp4"
I am using this command to convert avi, mov and m4v video files to flv format via FFmpeg:
/usr/local/bin/ffmpeg -i '/home/public_html/files/video_1355440448.m4v' -s '640x360' -sameq -ab '64k' -ar '44100' -f 'flv' -y /home/public_html/files/video_1355440448.flv
[flv @ 0x68b1a80] requested bitrate is too low
Output #0, flv, to '/home/files/1355472099-50cadce349290.flv':
Stream #0.0: Video: flv, yuv420p, 640x360, q=2-31, pass 2, 200 kb/s, 90k tbn, 25 tbc
Stream #0.1: Audio: adpcm_swf, 44100 Hz, 2 channels, s16, 64 kb/s
Stream mapping:
Stream #0.0 -> #0.0
Stream #0.1 -> #0.1
Error while opening encoder for output stream #0.0 - maybe incorrect parameters such as bit_rate, rate, width or height
-------------------------------
RESULT
-------------------------------
Execute error. Output for file "/home/public_html/files/video_1355472099.avi" was found, but the file contained no data. Please check the available codecs compiled with FFmpeg can support this type of conversion. You can check the encode decode availability by inspecting the output array from PHPVideoToolkit::getFFmpegInfo().
But if I manually run this command, then it works:
/usr/local/bin/ffmpeg -i '/home/public_html/files/video_1355440448.m4v' -s '640x360' -sameq -ab '64k' -ar '44100' -f 'flv' -y /home/public_html/files/video_1355440448.flv
This is because you have two streams and the output will be encoded and then resized; see your output messages:
Stream #0.0 -> #0.0
Stream #0.1 -> #0.1
... you are using adpcm_swf audio and yuv420p video.
The answer is very simple: you need to use copy as your audio codec ...
See my example with mpeg4/yuv420p video and ac3 audio ...
ffmpeg -i input.mkv -vf scale=720:-1 -acodec copy -threads 12 output.mkv
This sets the width to 720; the -1 tells ffmpeg to pick the height so the aspect ratio is preserved. You also need to use:
-acodec copy -threads 12
If you don't use this you will get an error.
For example: when I used it, the output encoding messages showed me this and it worked well:
[h264 @ 0x874e4a0] missing picture in access unit93 bitrate=1034.4kbits/s
Last message repeated 1163 times5974kB time=53.47 bitrate= 915.3kbits/s
For an flv output file, you need to use something like this:
ffmpeg -i input.mp4 -c:v libx264 -crf 19 output.flv
You are getting the error message
[flv @ 0x68b1a80] requested bitrate is too low
You need to change the bitrate to a valid value. It is better if you use a different audio codec:
-acodec libmp3lame
And remove the option -sameq. This option does NOT mean 'same quality'. It actually means 'same quantizers'!
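Putting those suggestions together, the original command might end up looking roughly like this (a sketch only; the 128k MP3 bitrate and -qscale:v 4 standing in for -sameq are illustrative values):
ffmpeg -i '/home/public_html/files/video_1355440448.m4v' -s '640x360' -acodec libmp3lame -ab 128k -ar 44100 -qscale:v 4 -f flv -y /home/public_html/files/video_1355440448.flv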
I had a similar problem due to size constraints. The original image size was odd (width=1343), so when I tried to specify a new size with -s, any rounding error caused problems. Make sure that the new image size keeps exactly the same aspect ratio!
I got the same issue
- requested bitrate is too low
and resolved it by lowering the bitrate, by adding
-b:a 32k