I am trying to take direct video output from a 4K Sony Handycam via HDMI directly into a Blackmagic Intensity Pro 4K. I can verify that the camera, HDMI cable, and Blackmagic card are working, as I can capture and view video using the provided "Media Express" program. When I use ffmpeg I do get video output, but I also get a buffer overrun.
Here is the command:
time ffmpeg -f decklink -i "Intensity Pro 4K@20" -c:v nvenc -b:v 100M -vf "yadif=0:-1:0" -pix_fmt yuv420p -crf 29.97 -strict -2 output.mp4
And I get the following output:
ffmpeg version N-76538-gb83c849 Copyright (c) 2000-2015 the FFmpeg
developers built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
configuration: --enable-nonfree --enable-nvenc --enable-nvresize --extra-cflags=-I../cudautils --extra-ldflags=-L../cudautils --enable-gpl --enable-libx264 --enable-libx265 --enable-decklink --extra-cflags=-I/home/tristan/Downloads/BlackmagicDeckLinkSDK10.6.5/Linux/include --extra-ldflags=-L/home/tristan/Downloads/BlackmagicDeckLinkSDK10.6.5/Linux/include
libavutil 55. 5.100 / 55. 5.100
libavcodec 57. 15.100 / 57. 15.100
libavformat 57. 14.100 / 57. 14.100
libavdevice 57. 0.100 / 57. 0.100
libavfilter 6. 15.100 / 6. 15.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
[decklink # 0x1ccd6e0] Found Decklink mode 3840 x 2160 with rate 29.97
[decklink # 0x1ccd6e0] Stream #1: not enough frames to estimate rate; consider increasing probesize
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, decklink, from 'Intensity Pro 4K#20':
Duration: N/A, start: 0.000000, bitrate: 1536 kb/s
Stream #0:0: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s
Stream #0:1: Video: rawvideo (UYVY / 0x59565955), uyvy422, 3840x2160, -5 kb/s, 29.97 tbr, 1000k tbn, 29.97 tbc
Codec AVOption crf (Select the quality for constant quality mode) specified for output file #0 (output.mp4) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
File 'output.mp4' already exists. Overwrite ? [y/N] y
Output #0, mp4, to 'output.mp4':
Metadata:
encoder : Lavf57.14.100
Stream #0:0: Video: h264 (nvenc) ([33][0][0][0] / 0x0021), yuv420p, 3840x2160, q=-1--1, 100000 kb/s, 29.97 fps, 30k tbn, 29.97 tbc
Metadata:
encoder : Lavc57.15.100 nvenc
Stream #0:1: Audio: aac ([64][0][0][0] / 0x0040), 48000 Hz, stereo, fltp, 128 kb/s
Metadata:
encoder : Lavc57.15.100 aac
Stream mapping:
Stream #0:1 -> #0:0 (rawvideo (native) -> h264 (nvenc))
Stream #0:0 -> #0:1 (pcm_s16le (native) -> aac (native))
Press [q] to stop, [?] for help
[decklink # 0x1ccd6e0] Decklink input buffer overrun!:03.15 bitrate=70411.7kbits/s
Last message repeated 1 times
[decklink # 0x1ccd6e0] Decklink input buffer overrun!:03.54 bitrate=73110.9kbits/s
Last message repeated 20 times
[decklink # 0x1ccd6e0] Decklink input buffer overrun!:03.92 bitrate=76270.2kbits/s
Last message repeated 15 times
[decklink # 0x1ccd6e0] Decklink input buffer overrun!:04.28 bitrate=78367.6kbits/s
Last message repeated 61 times
frame= 140 fps= 22 q=-0.0 Lsize= 57266kB time=00:00:04.67 bitrate=100425.2kbits/s
video:57187kB audio:72kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.009844%
[decklink # 0x1ccd6e0] Decklink input buffer overrun!
Last message repeated 7 times
[aac # 0x1cd7020] Qavg: 215.556
real 0m8.808s
user 0m5.785s
sys 0m1.749s
Any insight into this would be appreciated, be it commands that might fix the issue or otherwise.
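One workaround sketch (not verified on this hardware; the device/mode specifier is taken from the question, and the file names are examples): take the deinterlacer and encoder out of the real-time path entirely. Capture raw UYVY video plus PCM audio to a fast disk first, then deinterlace and encode offline, where there is no real-time deadline and therefore no overrun. Note that raw UYVY at 3840x2160 and 29.97 fps is roughly 500 MB/s, so the disk has to keep up.

```shell
# Step 1: real-time capture only, no filtering or encoding.
step1='ffmpeg -f decklink -i "Intensity Pro 4K@20" -c:v rawvideo -c:a pcm_s16le capture.nut'

# Step 2: offline deinterlace + NVENC encode; no real-time constraint here.
step2='ffmpeg -i capture.nut -vf "yadif=0:-1:0" -pix_fmt yuv420p -c:v nvenc -b:v 100M -strict -2 output.mp4'

# Shown as strings here; run them manually, in order.
printf '%s\n%s\n' "$step1" "$step2"
```

The .nut container is just one option; any container that can hold rawvideo and pcm_s16le would do.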
Related
I need to take the audio streams from several IP cameras and merge them into one file, so that they sound simultaneously.
I tried the "amix" filter (for testing purposes I take the audio stream twice from the same camera; yes, I also tried 2 cameras, and the result is the same):
ffmpeg -i rtsp://user:pass@172.22.5.202 -i rtsp://user:pass@172.22.5.202 -map 0:a -map 1:a -filter_complex amix=inputs=2:duration=first:dropout_transition=3 -ar 22050 -vn -f flv rtmp://172.22.45.38:1935/live/stream1
Result: I say "hello" and hear the first "hello" in the speakers, then 1 second later the second "hello", instead of hearing the two "hello"s simultaneously.
And I tried the "amerge" filter:
ffmpeg -i rtsp://user:pass@172.22.5.202 -i rtsp://user:pass@172.22.5.202 -map 0:a -map 1:a -filter_complex amerge -ar 22050 -vn -f flv rtmp://172.22.45.38:1935/live/stream1
Result: the same as in the first example, but now I hear the first "hello" in the left speaker and, 1 second later, the second "hello" in the right speaker, instead of hearing the two "hello"s in both speakers simultaneously.
So, the question is: how do I make them sound simultaneous? Maybe you know some parameter, or some other command?
P.S. Here is the full command-line output for both variants if you need it:
amix:
[root@minjust ~]# ffmpeg -i rtsp://admin:12345@172.22.5.202 -i rtsp://admin:12345@172.22.5.202 -map 0:a -map 1:a -filter_complex amix=inputs=2:duration=longest:dropout_transition=0 -vn -ar 22050 -f flv rtmp://172.22.45.38:1935/live/stream1 ffmpeg version N-76031-g9099079 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-16)
configuration: --enable-gpl --enable-libx264 --enable-libmp3lame --enable-nonfree --enable-version3
libavutil 55. 4.100 / 55. 4.100
libavcodec 57. 6.100 / 57. 6.100
libavformat 57. 4.100 / 57. 4.100
libavdevice 57. 0.100 / 57. 0.100
libavfilter 6. 11.100 / 6. 11.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.100 / 2. 0.100
libpostproc 54. 0.100 / 54. 0.100
Input #0, rtsp, from 'rtsp://admin:12345#172.22.5.202':
Metadata:
title : Media Presentation
Duration: N/A, start: 0.032000, bitrate: N/A
Stream #0:0: Video: h264 (Baseline), yuv420p, 1280x720, 20 fps, 25 tbr, 90k tbn, 40 tbc
Stream #0:1: Audio: adpcm_g726, 8000 Hz, mono, s16, 16 kb/s
Stream #0:2: Data: none
Input #1, rtsp, from 'rtsp://admin:12345#172.22.5.202':
Metadata:
title : Media Presentation
Duration: N/A, start: 0.032000, bitrate: N/A
Stream #1:0: Video: h264 (Baseline), yuv420p, 1280x720, 20 fps, 25 tbr, 90k tbn, 40 tbc
Stream #1:1: Audio: adpcm_g726, 8000 Hz, mono, s16, 16 kb/s
Stream #1:2: Data: none
Output #0, flv, to 'rtmp://172.22.45.38:1935/live/stream1':
Metadata:
title : Media Presentation
encoder : Lavf57.4.100
Stream #0:0: Audio: mp3 (libmp3lame) ([2][0][0][0] / 0x0002), 22050 Hz, mono, fltp (default)
Metadata:
encoder : Lavc57.6.100 libmp3lame
Stream mapping:
Stream #0:1 (g726) -> amix:input0
Stream #1:1 (g726) -> amix:input1
amix -> Stream #0:0 (libmp3lame)
Press [q] to stop, [?] for help
[rtsp # 0x2689600] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
[rtsp # 0x2727c60] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
[rtsp # 0x2689600] max delay reached. need to consume packet
[NULL # 0x268c500] RTP: missed 38 packets
[rtsp # 0x2689600] max delay reached. need to consume packet
[NULL # 0x268d460] RTP: missed 4 packets
[flv # 0x2958360] Failed to update header with correct duration.
[flv # 0x2958360] Failed to update header with correct filesize.
size= 28kB time=00:00:06.18 bitrate= 36.7kbits/s
video:0kB audio:24kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 16.331224%
and amerge:
[root@minjust ~]# ffmpeg -i rtsp://admin:12345@172.22.5.202 -i rtsp://admin:12345@172.22.5.202 -map 0:a -map 1:a -filter_complex amerge -vn -ar 22050 -f flv rtmp://172.22.45.38:1935/live/stream1
ffmpeg version N-76031-g9099079 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-16)
configuration: --enable-gpl --enable-libx264 --enable-libmp3lame --enable-nonfree --enable-version3
libavutil 55. 4.100 / 55. 4.100
libavcodec 57. 6.100 / 57. 6.100
libavformat 57. 4.100 / 57. 4.100
libavdevice 57. 0.100 / 57. 0.100
libavfilter 6. 11.100 / 6. 11.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.100 / 2. 0.100
libpostproc 54. 0.100 / 54. 0.100
Input #0, rtsp, from 'rtsp://admin:12345#172.22.5.202':
Metadata:
title : Media Presentation
Duration: N/A, start: 0.064000, bitrate: N/A
Stream #0:0: Video: h264 (Baseline), yuv420p, 1280x720, 20 fps, 25 tbr, 90k tbn, 40 tbc
Stream #0:1: Audio: adpcm_g726, 8000 Hz, mono, s16, 16 kb/s
Stream #0:2: Data: none
Input #1, rtsp, from 'rtsp://admin:12345#172.22.5.202':
Metadata:
title : Media Presentation
Duration: N/A, start: 0.032000, bitrate: N/A
Stream #1:0: Video: h264 (Baseline), yuv420p, 1280x720, 20 fps, 25 tbr, 90k tbn, 40 tbc
Stream #1:1: Audio: adpcm_g726, 8000 Hz, mono, s16, 16 kb/s
Stream #1:2: Data: none
[Parsed_amerge_0 # 0x3069cc0] No channel layout for input 1
[Parsed_amerge_0 # 0x3069cc0] Input channel layouts overlap: output layout will be determined by the number of distinct input channels
Output #0, flv, to 'rtmp://172.22.45.38:1935/live/stream1':
Metadata:
title : Media Presentation
encoder : Lavf57.4.100
Stream #0:0: Audio: mp3 (libmp3lame) ([2][0][0][0] / 0x0002), 22050 Hz, stereo, s16p (default)
Metadata:
encoder : Lavc57.6.100 libmp3lame
Stream mapping:
Stream #0:1 (g726) -> amerge:in0
Stream #1:1 (g726) -> amerge:in1
amerge -> Stream #0:0 (libmp3lame)
Press [q] to stop, [?] for help
[rtsp # 0x2f71640] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
[rtsp # 0x300fb40] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
[rtsp # 0x2f71640] max delay reached. need to consume packet
[NULL # 0x2f744a0] RTP: missed 18 packets
[flv # 0x3058b00] Failed to update header with correct duration.
[flv # 0x3058b00] Failed to update header with correct filesize.
size= 39kB time=00:00:04.54 bitrate= 70.2kbits/s
video:0kB audio:36kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 8.330614%
Thanks.
UPDATE 30 Oct 2015: I found an interesting detail when connecting 2 cameras (they have different microphones and I can hear the difference between them): the order of the "hello"s from the different cams depends on the ORDER OF THE INPUTS.
with command
ffmpeg -i rtsp://cam2 -i rtsp://cam1 -map 0:a -map 1:a -filter_complex amix=inputs=2:duration=longest:dropout_transition=0 -vn -ar 22050 -f flv rtmp://172.22.45.38:1935/live/stream1
I hear "hello" from the 1st cam and then, 1 second later, "hello" from the 2nd cam.
with command
ffmpeg -i rtsp://cam1 -i rtsp://cam2 -map 0:a -map 1:a -filter_complex amix=inputs=2:duration=longest:dropout_transition=0 -vn -ar 22050 -f flv rtmp://172.22.45.38:1935/live/stream1
I hear "hello" from the 2nd cam and then, 1 second later, "hello" from the 1st cam.
So, as I understand it, ffmpeg reads the inputs not simultaneously but in the order they are given.
Question: how do I tell ffmpeg to read the inputs simultaneously?
Using amix with two local files works perfectly; you just can't make two live audio streams play at once this way.
When the input is a local file, ffmpeg knows exactly when it starts, so the streams can be mixed into one audio track.
But when the input is a live stream, ffmpeg doesn't know exactly when it starts, so the start times will differ between streaming URLs.
More importantly, ffmpeg does not read its inputs concurrently; that is why the order of the "hello"s depends on the order of the inputs.
The one solution I know of is Adobe FMLE (Flash Media Live Encoder), which supports timecode when using RTMP streaming. You can get the timecode from both live streams and then finally mix the two audio tracks into one.
Perhaps you can start with this article: http://www.overdigital.com/2013/03/25/3ways-to-sync-data/
Try
ffmpeg -i rtsp://user:pass@172.22.5.202 -i rtsp://user:pass@172.22.5.202 \
-filter_complex \
"[0:a]asetpts=PTS-STARTPTS[a1];[1:a]asetpts=PTS-STARTPTS[a2]; \
[a1][a2]amix=inputs=2:duration=first:dropout_transition=3[a]" \
-map "[a]" -ar 22050 -vn -f flv rtmp://172.22.45.38:1935/live/stream1
I have an RTMP stream created by a Flash player in H.264, but when I convert it to a video or thumbnail using ffmpeg, it sometimes works after a very, very long time and sometimes does not work at all. However, if I create a stream with Flash Media Live Encoder on the same FMS server, the command below works fine. At the same time, if I try the stream in a player, it plays well.
I am using an IP address, so I don't think a DNS resolution issue is possible either.
ffmpeg -i rtmp://xxx.xxx.xx.xx/live/bdeef2c065509361e78fa8cac90aac741cc5ee29 -r 1 -an -updatefirst 1 -y thumbnail.jpg
The following is when it worked after 15-20 minutes:
ffmpeg -i "rtmp://xxx.xxx.xx.xx/live/bdeef2c065509361e78fa8cac90aac741cc5ee29 live=1" -r 1 -an -updatefirst 1 -y thumb.jpg
[root@test ~]# ffmpeg -i rtmp://38.125.41.20/live/bdeef2c065509361e78fa8cac90aac741cc5ee29 -r 1 -an -updatefirst 1 -y thumbnail.jpg
ffmpeg version N-49953-g7d0e3b1-syslint Copyright (c) 2000-2013 the FFmpeg developers
built on Feb 14 2013 15:29:40 with gcc 4.4.6 (GCC) 20120305 (Red Hat 4.4.6-4)
configuration: --prefix=/usr/local/cpffmpeg --enable-shared --enable-nonfree --enable-gpl --enable-pthreads --enable-libopencore-amrnb --enable-decoder=liba52 --enable-libopencore-amrwb --enable-libfaac --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --extra-cflags=-I/usr/local/cpffmpeg/include/ --extra-ldflags=-L/usr/local/cpffmpeg/lib --enable-version3 --extra-version=syslint
libavutil 52. 17.101 / 52. 17.101
libavcodec 54. 91.103 / 54. 91.103
libavformat 54. 63.100 / 54. 63.100
libavdevice 54. 3.103 / 54. 3.103
libavfilter 3. 37.101 / 3. 37.101
libswscale 2. 2.100 / 2. 2.100
libswresample 0. 17.102 / 0. 17.102
libpostproc 52. 2.100 / 52. 2.100
[flv # 0x14c0100] Stream #1: not enough frames to estimate rate; consider increasing probesize
[flv # 0x14c0100] Could not find codec parameters for stream 1 (Audio: none, 0 channels): unspecified sample format
Consider increasing the value for the 'analyzeduration' and 'probesize' options
[flv # 0x14c0100] Estimating duration from bitrate, this may be inaccurate
Input #0, flv, from 'rtmp://xxx.xxx.xx.xx/bdeef2c065509361e78fa8cac90aac741cc5ee29':
Metadata:
keyFrameInterval: 15
quality : 90
level : 3.1
bandwith : 0
codec : H264Avc
fps : 15
profile : baseline
Duration: N/A, start: 0.000000, bitrate: N/A
Stream #0:0: Video: h264 (Baseline), yuv420p, 640x480 [SAR 1:1 DAR 4:3], 15 tbr, 1k tbn, 30 tbc
Stream #0:1: Audio: none, 0 channels
Output #0, image2, to 'thumbnail.jpg':
Metadata:
keyFrameInterval: 15
quality : 90
level : 3.1
bandwith : 0
codec : H264Avc
fps : 15
profile : baseline
encoder : Lavf54.63.100
Stream #0:0: Video: mjpeg, yuvj420p, 640x480 [SAR 1:1 DAR 4:3], q=2-31, 200 kb/s, 90k tbn, 1 tbc
Stream mapping:
Stream #0:0 -> #0:0 (h264 -> mjpeg)
Press [q] to stop, [?] for help
frame= 2723 fps=1.3 q=1.6 size=N/A time=00:45:23.00 bitrate=N/A dup=8 drop=12044
And on stopping the stream, by closing the browser running the Flash player that is publishing the video, I get the following:
[flv # 0x23684e0] Could not find codec parameters for stream 1 (Audio: none, 0 channels): unspecified sample format
Consider increasing the value for the 'analyzeduration' and 'probesize' options
[flv # 0x23684e0] Estimating duration from bitrate, this may be inaccurate
Input #0, flv, from 'rtmp://xxx.xxx.xx.xx/live/bdeef2c065509361e78fa8cac90aac741cc5ee29':
Metadata:
keyFrameInterval: 15
quality : 90
bandwith : 0
level : 3.1
codec : H264Avc
fps : 15
profile : baseline
Duration: N/A, start: 0.000000, bitrate: N/A
Stream #0:0: Video: h264 (Baseline), yuv420p, 640x480 [SAR 1:1 DAR 4:3], 15 tbr, 1k tbn, 30 tbc
Stream #0:1: Audio: none, 0 channels
Whereas if I stop the stream, it quickly creates a thumbnail file; it is the running stream that is the issue.
I found the reason and cause of this: if a stream is created by Flash with no microphone selected, the audio channel count is 0 in the published RTMP stream, so the audio codec probing goes into some kind of loop and never returns. I have found the cause, but I am looking for a way to get rid of this loop when there is no audio channel; I may have to modify the RTMP source code and compile it again.
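A stopgap sketch that avoids recompiling (option values are guesses, not tested against FMS): cap how long ffmpeg probes the input and ignore audio entirely, so stream analysis cannot hang on the empty audio track. The URL is the placeholder from the question.

```shell
# Hypothetical wrapper: URL and output name are placeholders.
url='rtmp://xxx.xxx.xx.xx/live/bdeef2c065509361e78fa8cac90aac741cc5ee29 live=1'

# -analyzeduration/-probesize limit input probing; -an drops audio outright.
cmd="ffmpeg -analyzeduration 1000000 -probesize 1000000 -i \"$url\" -an -r 1 -updatefirst 1 -y thumbnail.jpg"

printf '%s\n' "$cmd"
```

Whether the smaller probe window is enough to skip the zero-channel audio stream depends on the build; it is worth trying before patching the RTMP code.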
I need to extract subtitles from different video files to the .srt format (to use them in HTML5 video).
I tried a lot of variants I found with Google, but every time I get this error:
Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
I think this error means that ffmpeg can decode the source subtitle but can't encode it to the .srt format. All codecs are enabled (I compiled the latest ffmpeg version from git a few times with different configurations).
Here is the output:
# /usr/local/bin/ffmpeg -i /var/video/sources/Balbesy1.m2ts -an -vn -copyinkf -scodec srt -f srt -y sub.srt
ffmpeg version N-49947-g9f16cb9 Copyright (c) 2000-2013 the FFmpeg developers
built on Feb 14 2013 14:26:10 with gcc 4.4.5 (Debian 4.4.5-8)
configuration: --enable-encoder=dvdsub --enable-decoder=dvdsub --enable-decoder=pgssub --enable-encoder=srt --enable-decoder=srt --enable-encoder=srt --enable-decoder=srt
libavutil 52. 17.101 / 52. 17.101
libavcodec 54. 91.103 / 54. 91.103
libavformat 54. 63.100 / 54. 63.100
libavdevice 54. 3.103 / 54. 3.103
libavfilter 3. 37.101 / 3. 37.101
libswscale 2. 2.100 / 2. 2.100
libswresample 0. 17.102 / 0. 17.102
[mpegts # 0x2719040] Stream #5: not enough frames to estimate rate; consider increasing probesize
[mpegts # 0x2719040] Could not find codec parameters for stream 5 (Subtitle: hdmv_pgs_subtitle ([144][0][0][0] / 0x0090)): unspecified size
Consider increasing the value for the 'analyzeduration' and 'probesize' options
[NULL # 0x271fec0] start time is not set in estimate_timings_from_pts
Input #0, mpegts, from '/var/video/sources/Balbesy1.m2ts':
Duration: 01:18:11.89, start: 599.958300, bitrate: 37378 kb/s
Program 1
Stream #0:0[0x1011]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 23.98 fps, 23.98 tbr, 90k tbn, 47.95 tbc
Stream #0:1[0x1100]: Audio: dts (DTS-HD MA) ([134][0][0][0] / 0x0086), 48000 Hz, 5.1(side), fltp, 768 kb/s
Stream #0:2[0x1101]: Audio: dts (DTS-HD MA) ([134][0][0][0] / 0x0086), 48000 Hz, 5.1(side), fltp, 768 kb/s
Stream #0:3[0x1102]: Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, 5.1(side), fltp, 448 kb/s
Stream #0:4[0x1103]: Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, 5.1(side), fltp, 448 kb/s
Stream #0:5[0x1200]: Subtitle: hdmv_pgs_subtitle ([144][0][0][0] / 0x0090)
Output #0, srt, to 'sub.srt':
Stream #0:0: Subtitle: srt
Stream mapping:
Stream #0:5 -> #0:0 (pgssub -> srt)
Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
Sorry for my English.
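For what it's worth, hdmv_pgs_subtitle is a bitmap subtitle format, so ffmpeg's text-based srt encoder cannot accept it as input; turning it into .srt requires OCR outside ffmpeg. A sketch of the usual workaround (the .sup muxer needs a considerably newer ffmpeg than the 2013 build shown above) is to copy the bitmap stream out untouched and OCR the resulting file with an external tool such as Subtitle Edit:

```shell
# Copy the first subtitle stream as-is; no decoding or encoding involved,
# so the "incorrect parameters" encoder error cannot occur.
cmd='ffmpeg -i /var/video/sources/Balbesy1.m2ts -map 0:s:0 -c:s copy subs.sup'
printf '%s\n' "$cmd"
```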
The mp3 has an image in it, maybe some album art. When I use ffmpeg to convert it to mp4, it goes wrong. But if I convert an mp3 without an image, it succeeds.
My command is like this:
ffmpeg -i input.mp3 output.mp4
Here's the error:
Stream mapping:
Stream #0:1 -> #0:0 (mjpeg -> mpeg4)
Stream #0:0 -> #0:1 (mp3 -> aac)
Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
Here is all the console output:
ellodeiMac:mine ello$ ffmpeg -frames 0 -i 4.mp3 -y test.mp4
ffmpeg version 0.11.2 Copyright (c) 2000-2012 the FFmpeg developers
built on Oct 24 2012 12:21:13 with llvm_gcc 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.9.00)
configuration: --disable-yasm
libavutil 51. 54.100 / 51. 54.100
libavcodec 54. 23.100 / 54. 23.100
libavformat 54. 6.100 / 54. 6.100
libavdevice 54. 0.100 / 54. 0.100
libavfilter 2. 77.100 / 2. 77.100
libswscale 2. 1.100 / 2. 1.100
libswresample 0. 15.100 / 0. 15.100
[mp3 # 0x7fa12301ae00] max_analyze_duration 5000000 reached at 5015510
Input #0, mp3, from '4.mp3':
Metadata:
artist : 贵族乐团
album : 美声天籁
title : 肖邦离别曲
Tagging time : 2012-09-18T08:12:10
Duration: 00:04:01.44, start: 0.000000, bitrate: 129 kb/s
Stream #0:0: Audio: mp3, 44100 Hz, stereo, s16, 128 kb/s
Stream #0:1: Video: mjpeg, yuvj420p, 240x240 [SAR 1:1 DAR 1:1], 90k tbr, 90k tbn, 90k tbc
Metadata:
title : e
comment : Cover (front)
[buffer # 0x109115780] w:240 h:240 pixfmt:yuvj420p tb:1/90000 sar:1/1 sws_param:flags=2
[buffersink # 0x109133720] No opaque field provided
[format # 0x1091338e0] auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'format'
[scale # 0x109133bc0] w:240 h:240 fmt:yuvj420p sar:1/1 -> w:240 h:240 fmt:yuv420p sar:1/1 flags:0x4
[mp4 # 0x7fa123035c00] Frame rate very high for a muxer not efficiently supporting it.
Please consider specifying a lower framerate, a different muxer or -vsync 2
[aformat # 0x109136ec0] auto-inserting filter 'auto-inserted resampler 0' between the filter 'src' and the filter 'aformat'
[aresample # 0x1091370c0] chl:stereo fmt:s16 r:44100Hz -> chl:stereo fmt:flt r:44100Hz
[mpeg4 # 0x7fa12303be00] timebase 1/90000 not supported by MPEG 4 standard, the maximum
admitted value for the timebase denominator is 65535
Output #0, mp4, to 'test.mp4':
Metadata:
artist : 贵族乐团
album : 美声天籁
title : 肖邦离别曲
Tagging time : 2012-09-18T08:12:10
Stream #0:0: Video: mpeg4, yuv420p, 240x240 [SAR 1:1 DAR 1:1], q=2-31, 200 kb/s, 90k tbn, 90k tbc
Metadata:
title : e
comment : Cover (front)
Stream #0:1: Audio: none, 44100 Hz, stereo, flt, 128 kb/s
Stream mapping:
Stream #0:1 -> #0:0 (mjpeg -> mpeg4)
Stream #0:0 -> #0:1 (mp3 -> aac)
Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
Use -vn to remove the video stream.
ffmpeg -i input.mp3 -vn output.mp4
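An equivalent sketch: map only the audio stream explicitly, which likewise leaves the mjpeg cover art (stream #0:1) out of the output.

```shell
# -map 0:a selects audio streams only; the attached cover image is never mapped.
cmd='ffmpeg -i input.mp3 -map 0:a output.mp4'
printf '%s\n' "$cmd"
```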
I convert AVI to FLV with ffmpeg using the -sameq parameter (same quality):
ffmpeg -i test.avi -sameq -f flv sameq.flv
The resulting file has the same video and audio quality as the original, but it's more than twice the original file size:
84M sameq.flv
41M test.avi
Why does it happen?
Transcoder output:
ffmpeg version N-34750-g070d2d7, Copyright (c) 2000-2011 the FFmpeg developers
built on Nov 12 2011 11:23:07 with gcc 4.6.1
configuration: --enable-gpl --enable-version3 --enable-nonfree --enable-postproc --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-x11grab
libavutil 51. 24. 1 / 51. 24. 1
libavcodec 53. 33. 0 / 53. 33. 0
libavformat 53. 20. 0 / 53. 20. 0
libavdevice 53. 4. 0 / 53. 4. 0
libavfilter 2. 48. 0 / 2. 48. 0
libswscale 2. 1. 0 / 2. 1. 0
libpostproc 51. 2. 0 / 51. 2. 0
Input #0, avi, from 'test.avi':
Duration: 00:06:30.00, start: 0.000000, bitrate: 866 kb/s
Stream #0:0: Video: mpeg4 (Advanced Real Time Simple Profile) (DIVX / 0x58564944), yuv420p, 400x300 [SAR 1:1 DAR 4:3], 25 tbr, 25 tbn, 25 tbc
Stream #0:1: Audio: mp3 (U[0][0][0] / 0x0055), 44100 Hz, mono, s16, 64 kb/s
[buffer # 0xa247ae0] w:400 h:300 pixfmt:yuv420p tb:1/1000000 sar:1/1 sws_param:
Output #0, flv, to 'sameq.flv':
Metadata:
encoder : Lavf53.20.0
Stream #0:0: Video: flv1 ([2][0][0][0] / 0x0002), yuv420p, 400x300 [SAR 1:1 DAR 4:3], q=2-31, 200 kb/s, 1k tbn, 25 tbc
Stream #0:1: Audio: mp3 ([2][0][0][0] / 0x0002), 44100 Hz, mono, s16, 128 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (mpeg4 -> flv)
Stream #0:1 -> #0:1 (mp3 -> libmp3lame)
Press [q] to stop, [?] for help
frame= 9742 fps=255 q=0.0 Lsize= 85074kB time=00:06:30.00 bitrate=1787.0kbits/s
video:79163kB audio:5525kB global headers:0kB muxing overhead 0.455568%
Two things come to mind:
Compress a video without an audio stream to eliminate the audio portion of this issue. By the way, the audio source is HALF the bitrate of the output, which increases the size a little. Use the -ar and -ab switches to control the output.
Check out this article on qscale vs. quality using the -qscale option. Add in the -b (bitrate) and -s (size) switches and tweak them to your needs.
If all else fails, there are a few more switches you can try from the ffmpeg website, or try the newer H.264 compression; the two-pass option is recommended. Have fun compressing!
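The two-pass option mentioned above can be sketched like this (the 800k bitrate and the output names are arbitrary examples; tune them to taste):

```shell
# Pass 1: analysis only; video statistics are written to a log, output discarded.
pass1='ffmpeg -y -i test.avi -c:v libx264 -b:v 800k -pass 1 -an -f null /dev/null'

# Pass 2: the actual encode, using the pass-1 log to hit the target bitrate.
pass2='ffmpeg -i test.avi -c:v libx264 -b:v 800k -pass 2 -c:a libmp3lame -b:a 64k out.mp4'

printf '%s\n%s\n' "$pass1" "$pass2"
```

With a fixed target bitrate you control the file size directly, instead of letting -sameq pick whatever size the quantizer produces.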
It's because of -sameq. Note that -sameq actually means "same quantizer", not "same quality"; it gives you good quality, but you pay the price with a bigger file size.
Can you try adding:
-qcomp 1.0
video quantizer scale compression (VBR) (default 0.5). Constant of the ratecontrol equation. Recommended range for the default rc_eq: 0.0-1.0.