Error while opening encoder for output stream #0.0 - maybe incorrect parameters such as bit_rate, rate, width or height - ffmpeg

I am using this command to convert avi, mov and m4v video files to flv format via FFmpeg:
/usr/local/bin/ffmpeg -i '/home/public_html/files/video_1355440448.m4v' -s '640x360' -sameq -ab '64k' -ar '44100' -f 'flv' -y /home/public_html/files/video_1355440448.flv
[flv # 0x68b1a80] requested bitrate is too low
Output #0, flv, to '/home/files/1355472099-50cadce349290.flv':
Stream #0.0: Video: flv, yuv420p, 640x360, q=2-31, pass 2, 200 kb/s, 90k tbn, 25 tbc
Stream #0.1: Audio: adpcm_swf, 44100 Hz, 2 channels, s16, 64 kb/s
Stream mapping:
Stream #0.0 -> #0.0
Stream #0.1 -> #0.1
Error while opening encoder for output stream #0.0 - maybe incorrect parameters such as bit_rate, rate, width or height
-------------------------------
RESULT
-------------------------------
Execute error. Output for file "/home/public_html/files/video_1355472099.avi" was found, but the file contained no data. Please check the available codecs compiled with FFmpeg can support this type of conversion. You can check the encode decode availability by inspecting the output array from PHPVideoToolkit::getFFmpegInfo().
But if I run this command manually then it works:
/usr/local/bin/ffmpeg -i '/home/public_html/files/video_1355440448.m4v' -s '640x360' -sameq -ab '64k' -ar '44100' -f 'flv' -y /home/public_html/files/video_1355440448.flv

This is because you have two streams and the output will be encoded and then resized; see your output messages:
Stream #0.0 -> #0.0
Stream #0.1 -> #0.1
... you use adpcm_swf audio and yuv420p video.
The answer is very simple: you need to use copy as your audio codec ...
See my example with mpeg4/yuv420p video and ac3 audio ...
ffmpeg -i input.mkv -vf scale=720:-1 -acodec copy -threads 12 output.mkv
This sets the width to 720, with -1 letting ffmpeg pick the height so the aspect ratio is preserved. Also you need to use:
-acodec copy -threads 12
If you don't use this you will get an error.
For example, when I used it the encoding output showed me this and it worked well:
[h264 # 0x874e4a0] missing picture in access unit
Last message repeated 1163 times
size=5974kB time=53.47 bitrate=915.3kbits/s
For an flv output file, you need to use something like this:
ffmpeg -i input.mp4 -c:v libx264 -crf 19 output.flv

You are given an error message:
[flv # 0x68b1a80] requested bitrate is too low
You need to change the bitrate to a valid value. It is better if you use a different audio codec:
-acodec libmp3lame
And remove the option -sameq. This option does NOT mean 'same quality'; it actually means 'same quantizers'!
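Putting these suggestions together, the failing command could be rewritten along these lines (the -b:v 500k video bitrate is only an illustrative value, not from the original post):
/usr/local/bin/ffmpeg -i '/home/public_html/files/video_1355440448.m4v' -s 640x360 -b:v 500k -acodec libmp3lame -ab 64k -ar 44100 -f flv -y /home/public_html/files/video_1355440448.flv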

I had a similar problem due to size constraints. The original image size was strange (width=1343), meaning that when I tried to specify a new size with -s, any rounding error caused problems. Make sure that the new image size can have the exact same aspect ratio!
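If your ffmpeg build includes the scale filter, one way to avoid such rounding problems is to let ffmpeg compute an even height itself instead of passing a fixed -s (input and output names here are placeholders):
ffmpeg -i input.m4v -vf "scale=640:-2" -f flv -y output.flv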

I have got the same issue
- requested bitrate is too low
and just resolved it by lowering the audio bitrate, adding -b:a 32k.
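For example, applied to a command like the one in the question, it would look roughly like this (file names are placeholders):
ffmpeg -i input.m4v -s 640x360 -b:a 32k -ar 44100 -f flv -y output.flv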

Does Webm support cover art?

I am converting MP3 to Webm and the MP3 file includes a video stream for the cover art.
ffprobe filename.mp3
...
Stream #0:0: Audio: mp3, 22050 Hz, stereo, fltp, 64 kb/s
Stream #0:1: Video: mjpeg (Baseline), yuvj444p(pc, bt470bg/unknown/unknown), 300x300, 90k tbr, 90k tbn, 90k tbc (attached pic)
Using ffmpeg with the libopus codec to convert the file produces a VP9 video stream that doesn't work well. I noticed:
VLC Player doesn't show the duration and the progress scrubber doesn't move when playing.
Android Media Player doesn't show image for the cover art of the track.
ffprobe filename.webm
...
Input #0, matroska,webm, from 'webm_bad/B01___01_Matthew_____ENGWEBN2DA.webm':
...
Stream #0:0: Video: vp9 (Profile 1), yuv444p(tv, progressive), 300x300, SAR 1:1 DAR 1:1, 1k tbr, 1k tbn, 1k tbc (default)
If I try to use the -vcodec copy option, then I get this error:
[webm # 0x7fdddf028e00] Only VP8 or VP9 or AV1 video and Vorbis or Opus audio and WebVTT subtitles are supported for WebM.
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Error initializing output stream 0:1 --
Does WebM support cover art? If so, how do I transfer the MP3 cover art over using ffmpeg (or other tool)?
No, WebM does not support cover art.
From the FAQ:
The WebM file structure is based on the Matroska media container.
The cover art in a Matroska container is stored in an attachment:
Attachment Elements can be used to store related cover art, [...]
A WebM container does not support attachments:
Element Name        Description                                                                     WebM Support
Attachments         Contain attached files.                                                         Unsupported
AttachedFile        An attached file.                                                               Unsupported
FileDescription     A human-friendly name for the attached file.                                    Unsupported
FileName            Filename of the attached file.                                                  Unsupported
FileMimeType        MIME type of the file.                                                          Unsupported
FileData            The data of the file.                                                           Unsupported
FileUID             Unique ID representing the file, as random as possible.                         Unsupported
FileReferral        A binary value that a track/codec can refer to when the attachment is needed.   Unsupported
FileUsedStartTime   DivX font extension                                                             Unsupported
FileUsedEndTime     DivX font extension                                                             Unsupported
Maybe you can consider using a different container. Opus audio streams, like the ones in a WebM container, are supported by other containers:
Opus was originally specified for encapsulation in Ogg containers
If you still want to use WebM, an alternative would be to create a video stream with a still image along with an audio stream. The FFmpeg wiki covers that topic in the Slideshow page. Combining that with this answer, which explains how to extract the cover art of an MP3 file, you could do the following:
ffmpeg -i filename.mp3 -an -c:v copy cover.jpeg
ffmpeg -loop 1 -i cover.jpeg -i filename.mp3 -c:v libvpx-vp9 -c:a libopus -b:a 64k -shortest filename.webm
64k is the bitrate that you show in the output of ffprobe.
The encoding might be slow with the second command. The Encode/Youtube page in the FFmpeg wiki shows an example command to create a video with a still image that uses the -framerate 2 option, like this:
ffmpeg -loop 1 -framerate 2 -i cover.jpeg -i filename.mp3 -c:v libvpx-vp9 -c:a libopus -b:a 64k -shortest filename.webm
For some reason I do not know, the output video of that last command cannot be played by my VLC and the player crashes. 6 was the minimum -framerate that did not crash my player, so be careful.
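In that case, the same command with the minimum frame rate that worked for me looks like this:
ffmpeg -loop 1 -framerate 6 -i cover.jpeg -i filename.mp3 -c:v libvpx-vp9 -c:a libopus -b:a 64k -shortest filename.webm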

ignore "channel_layout" when working with multichannel audio in ffmpeg

I'm working with multichannel audio files (higher-order ambisonics), that typically have at least 16 channels.
Sometimes I'm only interested in a subset of the audio channels (e.g. the first 25 channels of a file that contains even more channels).
For this I have a script like the following, that takes a multichannel input file, an output file and the number of channels I want to extract:
#!/bin/sh
infile=$1
outfile=$2
channels=$3
channelmap=$(seq -s"|" 0 $((channels-1)))
ffmpeg -y -hide_banner \
-i "${infile}" \
-filter_complex "[0:a]channelmap=${channelmap}" \
-c:a libopus -mapping_family 255 -b:a 160k -sample_fmt s16 -vn -f webm -dash 1 \
"${outfile}"
The actual channel extraction is done via the channelmap filter, which is invoked with something like -filter_complex "[0:a]channelmap=0|1|2|3"
This works great with 1, 2, 4 or 16 channels.
However, it fails with 9, 17 and 25 channels (and generally with any channel count that has no predefined layout).
The error I get is:
$ ffmpeg -y -hide_banner -i input.wav -filter_complex "[0:a]channelmap=0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16" -c:a libopus -mapping_family 255 -b:a 160k -sample_fmt s16 -vn -f webm -dash 1 output.webm
Input #0, wav, from 'input.wav':
Duration: 00:00:09.99, bitrate: 17649 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, 25 channels, s16, 17640 kb/s
[Parsed_channelmap_0 # 0x5568874ffbc0] Output channel layout is not set and cannot be guessed from the maps.
[AVFilterGraph # 0x5568874fff40] Error initializing filter 'channelmap' with args '0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16'
Error initializing complex filters.
Invalid argument
So ffmpeg cannot guess the channel layout for a 17 channel file.
ffmpeg -layouts only lists channel layouts with 1, 2, 3, 4, 5, 6, 7, 8 & 16 channels.
However, I really don't care about the channel layout. The entire concept of "channel layout" is centered around the idea that each audio channel should go to a different speaker.
But my audio channels are not speaker feeds at all.
So I tried providing explicit channel layouts, with something like -filter_complex "[0:a]channelmap=map=0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16:channel_layout=unknown", but this results in an error when parsing the channel layout:
$ ffmpeg -y -hide_banner -i input.wav -filter_complex "[0:a]channelmap=map=0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16:channel_layout=unknown" -c:a libopus -mapping_family 255 -b:a 160k -sample_fmt s16 -vn -f webm -dash 1 output.webm
Input #0, wav, from 'input.wav':
Duration: 00:00:09.99, bitrate: 17649 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, 25 channels, s16, 17640 kb/s
[Parsed_channelmap_0 # 0x55b60492bf80] Error parsing channel layout: 'unknown'.
[AVFilterGraph # 0x55b604916d00] Error initializing filter 'channelmap' with args 'map=0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16:channel_layout=unknown'
Error initializing complex filters.
Invalid argument
I also tried values like any, all, none, 0x0 and 0xFF with the same result.
I tried using mono (as the channels are kind-of independent), but ffmpeg is trying to be clever and tells me that a mono layout must not have 17 channels.
I know that ffmpeg can handle multi-channel files without a layout.
E.g. converting a 25-channel file without the -filter_complex "..." works without problems, and ffprobe gives me an unknown channel layout.
So: how do I tell ffmpeg to just not care about the channel_layout when creating an output file that only contains a subset of the input channels?
Based on Audio Channel Manipulation you could try splitting the input into n separate mono streams, then amerge them back together:
-filter_complex "\
[0:a]pan=mono|c0=c0[a0];\
[0:a]pan=mono|c0=c1[a1];\
[0:a]pan=mono|c0=c2[a2];\
[0:a]pan=mono|c0=c3[a3];\
[0:a]pan=mono|c0=c4[a4];\
[0:a]pan=mono|c0=c5[a5];\
[0:a]pan=mono|c0=c6[a6];\
[0:a]pan=mono|c0=c7[a7];\
[0:a]pan=mono|c0=c8[a8];\
[0:a]pan=mono|c0=c9[a9];\
[0:a]pan=mono|c0=c10[a10];\
[0:a]pan=mono|c0=c11[a11];\
[0:a]pan=mono|c0=c12[a12];\
[0:a]pan=mono|c0=c13[a13];\
[0:a]pan=mono|c0=c14[a14];\
[0:a]pan=mono|c0=c15[a15];\
[0:a]pan=mono|c0=c16[a16];\
[a0][a1][a2][a3][a4][a5][a6][a7][a8][a9][a10][a11][a12][a13][a14][a15][a16]amerge=inputs=17"
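If the channel count varies, you could generate that filtergraph from your original script instead of writing it by hand. A minimal sketch, assuming the same arguments as your script; treat it as an untested starting point:
#!/bin/sh
# Build a pan/amerge filtergraph that extracts the first $channels channels
# as independent mono streams and merges them back into one stream.
infile=$1
outfile=$2
channels=$3

filter=""
labels=""
i=0
while [ "$i" -lt "$channels" ]; do
    filter="${filter}[0:a]pan=mono|c0=c${i}[a${i}];"
    labels="${labels}[a${i}]"
    i=$((i+1))
done
filter="${filter}${labels}amerge=inputs=${channels}"

ffmpeg -y -hide_banner \
    -i "${infile}" \
    -filter_complex "${filter}" \
    -c:a libopus -mapping_family 255 -b:a 160k -sample_fmt s16 -vn -f webm -dash 1 \
    "${outfile}"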
Building on the answer from @aergistal, and working with an MXF file with 10 audio streams, I had to modify the filter in order to specify the input for every pan filter. With "pan=mono" each pan filter uses only one channel of its input, identified as c0:
-filter_complex "\
[0:a:0]pan=mono|c0=c0[a0];\
[0:a:1]pan=mono|c0=c0[a1];\
[0:a:2]pan=mono|c0=c0[a2];\
[0:a:3]pan=mono|c0=c0[a3];\
[0:a:4]pan=mono|c0=c0[a4];\
[0:a:5]pan=mono|c0=c0[a5];\
[0:a:6]pan=mono|c0=c0[a6];\
[0:a:7]pan=mono|c0=c0[a7];\
[0:a:8]pan=mono|c0=c0[a8];\
[0:a:9]pan=mono|c0=c0[a9];\
[a0][a1][a2][a3][a4][a5][a6][a7][a8][a9]amerge=inputs=10"

FFMPEG amerge fails when one audio stream is missing

I am trying to combine two video files (of size 320x240) and create a single, horizontally extended output video file (of size 640x240), but for the audio merging, the command fails when one of the input files does not contain an audio stream.
Here's the command I am using:
C:\ffmpeg\bin\ffmpeg.exe -y -i "input1.flv" -i "input2.flv" -filter_complex "nullsrc=size=640x240[base];[0:v]scale=320x240[upperleft];[1:v]scale=320x240[upperright];[base][upperleft]overlay=shortest=1[tmp1];[tmp1][upperright]overlay=shortest=1:x=320:y=0;[0:a][1:a]amerge=inputs=2[aout]" -map [aout] -ac 2 "output.mp4"
This command works fine when both input1.flv and input2.flv contain audio tracks. When either one lacks an audio track, the command gives the following error:
[flv # 0000000004300660] Stream discovered after head already parsed
[flv # 0000000004300660] Could not find codec parameters for stream 1 (Audio: none, 0 channels): unspecified sample format
Consider increasing the value for the 'analyzeduration' and 'probesize' options
Input #1, flv, from 'input2.flv':
Metadata:
creationdate : Tue Jan 26 16:50:12
Duration: 00:25:59.10, start: 0.000000, bitrate: 212 kb/s
Stream #1:0: Video: flv1, yuv420p, 320x240, 1k tbr, 1k tbn, 1k tbc
Stream #1:1: Audio: none, 0 channels
Stream #1:2: Data: none
[abuffer # 0000000004335620] Value inf for parameter 'time_base' out of range [0 - 2.14748e+009]
[abuffer # 0000000004335620] Unable to parse option value "(null)" as sample format
[abuffer # 0000000004335620] Value inf for parameter 'time_base' out of range [0 - 2.14748e+009]
[abuffer # 0000000004335620] Error setting option time_base to value 1/0.
[graph 0 input from stream 1:1 # 00000000042e4d60] Error applying options to the filter.
Error configuring filters.
Is there a way to make this command work even when one of the input files, or both of them, lacks an audio track?
There's one preparatory command that needs to be executed only once, to generate a short silent audio file:
ffmpeg -f lavfi -t 1 -i anullsrc=r=48000 silence.mkv
For each flv,
ffmpeg -i input1.flv -analyzeduration 10M -i silence.mkv -c copy -map 0 -map 1 input1a.mkv
ffmpeg -i input2.flv -analyzeduration 10M -i silence.mkv -c copy -map 0 -map 1 input2a.mkv
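If you want to confirm that each intermediate file now carries both a video and an audio stream before merging, a quick look with ffprobe shows the stream list:
ffprobe -hide_banner input1a.mkv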
And then,
ffmpeg -i input1a.mkv -i input2a.mkv -filter_complex "[0:v][1:v]hstack=shortest=1[v];[0:a][1:a]amerge[a]" -map [v] -map [a] -ac 2 "output.mp4"

FFMPEG attach file as metadata

I have a set of images which I want to convert to a video using ffmpeg. The following command works perfectly fine:
ffmpeg -y -i frames/%06d.png -c:v huffyuv -pix_fmt rgb24 testout.mkv
I have some metadata in a binary file which I want to attach to the video. I tried doing the following, but it gives me an error:
ffmpeg -y -i frames/%06d.png -c:v huffyuv -pix_fmt rgb24 -attach mybinaryfile -metadata:s:2 mimetype=application/octet-stream testout.mkv
This is the error:
[matroska # 0x656460] Codec for stream 1 does not use global headers but container format requires global headers
[matroska # 0x656460] Attachment stream 1 has no mimetype tag and it cannot be deduced from the codec id.
Output #0, matroska, to 'testout.mkv':
Metadata:
encoder : Lavf56.33.101
Stream #0:0: Video: huffyuv (HFYU / 0x55594648), rgb24, 640x640, q=2-31, 200 kb/s, 25 fps, 1k tbn, 25 tbc
Metadata:
encoder : Lavc56.39.100 huffyuv
Stream #0:1: Attachment: none
Metadata:
filename : 2ceb-1916-56bb-3e10
Stream mapping:
Stream #0:0 -> #0:0 (png (native) -> huffyuv (native))
File 2ceb-1916-56bb-3e10 -> Stream #0:1
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
It would be wonderful if somebody could explain to me what I am doing wrong :)
You need to specify your stream properly
Example:
ffmpeg -y -i frames/%06d.png -c:v huffyuv -pix_fmt rgb24 -attach mybinaryfile \
-metadata:s:t mimetype=application/octet-stream testout.mkv
This command will set the metadata for all attachment (t) streams (s). If you have more than one attachment, and the metadata are different, then you will have to be more specific, such as:
-metadata:s:t:0 mimetype=text/plain \
-metadata:s:t:1 mimetype=application/gzip
This will set the metadata for the first attachment as mimetype=text/plain, and the second as mimetype=application/gzip. Remember that the stream index starts at 0, so the first stream is labeled 0.
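As a complete example (the attachment file names here are placeholders, not from the question), a command with two attachments might look like:
ffmpeg -y -i frames/%06d.png -c:v huffyuv -pix_fmt rgb24 \
-attach notes.txt -attach payload.gz \
-metadata:s:t:0 mimetype=text/plain \
-metadata:s:t:1 mimetype=application/gzip \
testout.mkv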
What was wrong with your command
Using -metadata:s:2 (which appears to have been copied verbatim from the documentation) sets the metadata for the third stream, regardless of stream type (because no specifier is present), but your output only contained two streams.
Attachment: None
You may see something like this:
Output #0, matroska, to 'output.mkv':
...
Stream #0:1: Attachment: none
Metadata:
filename : 2ceb-1916-56bb-3e10
mimetype : application/octet-stream
Attachment: none does not mean that there is no attachment, but that there is no format associated with it, so it can be ignored.
Also see
Stream specifiers and the ffmpeg documentation on -attach, -metadata, and -map_metadata for more details.

how to convert videos to flv using ffmpeg in php?

I am trying to convert several different video formats to flv using ffmpeg, but it seems that only some videos go through.
ffmpeg -i /var/www/tmp/91640.avi -ar 22050 -ab 32 -f flv /var/www/videos/91640.flv
Here is some debug info:
Seems stream 0 codec frame rate differs from container frame rate: 23.98 (65535/2733) -> 23.98 (5000000/208541)
Input #0, avi, from '/var/www/tmp/91640.avi':
Duration: 00:01:12.82, start: 0.000000, bitrate: 5022 kb/s
Stream #0.0: Video: mpeg4, yuv420p, 1280x528 [PAR 1:1 DAR 80:33], 23.98 tbr, 23.98 tbn, 23.98 tbc
Stream #0.1: Audio: ac3, 48000 Hz, 5.1, s16, 448 kb/s
WARNING: The bitrate parameter is set too low. It takes bits/s as argument, not kbits/s
Output #0, flv, to '/var/www/videos/91640.flv':
Stream #0.0: Video: flv, yuv420p, 1280x528 [PAR 1:1 DAR 80:33], q=2-31, 200 kb/s, 90k tbn, 23.98 tbc
Stream #0.1: Audio: adpcm_swf, 22050 Hz, 5.1, s16, 0 kb/s
Stream mapping:
Stream #0.0 -> #0.0
Stream #0.1 -> #0.1
Error while opening codec for output stream #0.1 - maybe incorrect parameters such as bit_rate, rate, width or height
Also, if I try to grab one frame and convert it to jpeg I get an error as well:
ffmpeg -i /var/www/tmp/91640.avi -an -ss 00:00:03 -t 00:00:01 -r 1 -y /var/www/videos/91640.jpg
Debug info:
...
[mpeg4 # 0x1d7d810]Invalid and inefficient vfw-avi packed B frames detected
av_interleaved_write_frame(): I/O error occurred
Usually that means that input file is truncated and/or corrupted.
I'm thinking that the image grab fails because the video conversion failed in the first place, though I'm not sure.
Any ideas what goes wrong?
Bits, not kbits
From your console output:
WARNING: The bitrate parameter is set too low. It takes bits/s as argument, not kbits/s
Use 32k, not just 32.
Only stereo or mono is supported
The encoder adpcm_swf only supports mono or stereo, so add -ac 2 as an output option. The console output would have suggested this if you were using a recent ffmpeg build.
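With both of the above fixes applied, your original command would become something like:
ffmpeg -i /var/www/tmp/91640.avi -ar 22050 -ab 32k -ac 2 -f flv /var/www/videos/91640.flv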
Use -vframes 1 for single image outputs
Instead of -t 00:00:01 -r 1 use -vframes 1.
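For example, the frame grab then becomes something like:
ffmpeg -i /var/www/tmp/91640.avi -an -ss 00:00:03 -vframes 1 -y /var/www/videos/91640.jpg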
A better encoder
Instead of using the encoders flv and adpcm_swf, I recommend libx264 and libmp3lame:
ffmpeg -i input -vcodec libx264 -preset medium -crf 23 -acodec libmp3lame -ar 44100 -q:a 5 output.flv
-preset – Controls the encoding speed to compression ratio trade-off. Use the slowest preset you have patience for: ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, veryslow.
-crf – Constant Rate Factor. A lower value is a higher quality. Range is 0-51 for this encoder. 0 is lossless, 18 is roughly "visually lossless", 23 is default, and 51 is worst quality. Use the highest value that still gives an acceptable quality.
-q:a – Audio quality for libmp3lame. Range is 0-9 for this encoder. A lower value is a higher quality.
Also see
FFmpeg and x264 Encoding Guide
Encoding VBR (Variable Bit Rate) mp3 audio
