I need to take audio streams from several IP cameras and merge them into one file, so that they play simultaneously.
I tried the "amix" filter (for testing purposes I take the audio stream twice from the same camera; yes, I also tried 2 cameras, the result is the same):
ffmpeg -i rtsp://user:pass@172.22.5.202 -i rtsp://user:pass@172.22.5.202 -map 0:a -map 1:a -filter_complex amix=inputs=2:duration=first:dropout_transition=3 -ar 22050 -vn -f flv rtmp://172.22.45.38:1935/live/stream1
Result: I say "hello" and hear the first "hello" in the speakers, then the second "hello" one second later, instead of hearing the two "hello"s simultaneously.
And I tried the "amerge" filter:
ffmpeg -i rtsp://user:pass@172.22.5.202 -i rtsp://user:pass@172.22.5.202 -map 0:a -map 1:a -filter_complex amerge -ar 22050 -vn -f flv rtmp://172.22.45.38:1935/live/stream1
Result: the same as in the first example, but now I hear the first "hello" in the left speaker and, one second later, the second "hello" in the right speaker, instead of hearing both "hello"s in both speakers simultaneously.
So the question is: how do I make them sound simultaneously? Maybe you know some parameter, or some other command?
P.S. Here is the full command-line output for both variants, if you need it:
amix:
[root@minjust ~]# ffmpeg -i rtsp://admin:12345@172.22.5.202 -i rtsp://admin:12345@172.22.5.202 -map 0:a -map 1:a -filter_complex amix=inputs=2:duration=longest:dropout_transition=0 -vn -ar 22050 -f flv rtmp://172.22.45.38:1935/live/stream1
ffmpeg version N-76031-g9099079 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-16)
configuration: --enable-gpl --enable-libx264 --enable-libmp3lame --enable-nonfree --enable-version3
libavutil 55. 4.100 / 55. 4.100
libavcodec 57. 6.100 / 57. 6.100
libavformat 57. 4.100 / 57. 4.100
libavdevice 57. 0.100 / 57. 0.100
libavfilter 6. 11.100 / 6. 11.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.100 / 2. 0.100
libpostproc 54. 0.100 / 54. 0.100
Input #0, rtsp, from 'rtsp://admin:12345@172.22.5.202':
Metadata:
title : Media Presentation
Duration: N/A, start: 0.032000, bitrate: N/A
Stream #0:0: Video: h264 (Baseline), yuv420p, 1280x720, 20 fps, 25 tbr, 90k tbn, 40 tbc
Stream #0:1: Audio: adpcm_g726, 8000 Hz, mono, s16, 16 kb/s
Stream #0:2: Data: none
Input #1, rtsp, from 'rtsp://admin:12345@172.22.5.202':
Metadata:
title : Media Presentation
Duration: N/A, start: 0.032000, bitrate: N/A
Stream #1:0: Video: h264 (Baseline), yuv420p, 1280x720, 20 fps, 25 tbr, 90k tbn, 40 tbc
Stream #1:1: Audio: adpcm_g726, 8000 Hz, mono, s16, 16 kb/s
Stream #1:2: Data: none
Output #0, flv, to 'rtmp://172.22.45.38:1935/live/stream1':
Metadata:
title : Media Presentation
encoder : Lavf57.4.100
Stream #0:0: Audio: mp3 (libmp3lame) ([2][0][0][0] / 0x0002), 22050 Hz, mono, fltp (default)
Metadata:
encoder : Lavc57.6.100 libmp3lame
Stream mapping:
Stream #0:1 (g726) -> amix:input0
Stream #1:1 (g726) -> amix:input1
amix -> Stream #0:0 (libmp3lame)
Press [q] to stop, [?] for help
[rtsp @ 0x2689600] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
[rtsp @ 0x2727c60] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
[rtsp @ 0x2689600] max delay reached. need to consume packet
[NULL @ 0x268c500] RTP: missed 38 packets
[rtsp @ 0x2689600] max delay reached. need to consume packet
[NULL @ 0x268d460] RTP: missed 4 packets
[flv @ 0x2958360] Failed to update header with correct duration.
[flv @ 0x2958360] Failed to update header with correct filesize.
size= 28kB time=00:00:06.18 bitrate= 36.7kbits/s
video:0kB audio:24kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 16.331224%
and amerge:
[root@minjust ~]# ffmpeg -i rtsp://admin:12345@172.22.5.202 -i rtsp://admin:12345@172.22.5.202 -map 0:a -map 1:a -filter_complex amerge -vn -ar 22050 -f flv rtmp://172.22.45.38:1935/live/stream1
ffmpeg version N-76031-g9099079 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-16)
configuration: --enable-gpl --enable-libx264 --enable-libmp3lame --enable-nonfree --enable-version3
libavutil 55. 4.100 / 55. 4.100
libavcodec 57. 6.100 / 57. 6.100
libavformat 57. 4.100 / 57. 4.100
libavdevice 57. 0.100 / 57. 0.100
libavfilter 6. 11.100 / 6. 11.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.100 / 2. 0.100
libpostproc 54. 0.100 / 54. 0.100
Input #0, rtsp, from 'rtsp://admin:12345@172.22.5.202':
Metadata:
title : Media Presentation
Duration: N/A, start: 0.064000, bitrate: N/A
Stream #0:0: Video: h264 (Baseline), yuv420p, 1280x720, 20 fps, 25 tbr, 90k tbn, 40 tbc
Stream #0:1: Audio: adpcm_g726, 8000 Hz, mono, s16, 16 kb/s
Stream #0:2: Data: none
Input #1, rtsp, from 'rtsp://admin:12345@172.22.5.202':
Metadata:
title : Media Presentation
Duration: N/A, start: 0.032000, bitrate: N/A
Stream #1:0: Video: h264 (Baseline), yuv420p, 1280x720, 20 fps, 25 tbr, 90k tbn, 40 tbc
Stream #1:1: Audio: adpcm_g726, 8000 Hz, mono, s16, 16 kb/s
Stream #1:2: Data: none
[Parsed_amerge_0 @ 0x3069cc0] No channel layout for input 1
[Parsed_amerge_0 @ 0x3069cc0] Input channel layouts overlap: output layout will be determined by the number of distinct input channels
Output #0, flv, to 'rtmp://172.22.45.38:1935/live/stream1':
Metadata:
title : Media Presentation
encoder : Lavf57.4.100
Stream #0:0: Audio: mp3 (libmp3lame) ([2][0][0][0] / 0x0002), 22050 Hz, stereo, s16p (default)
Metadata:
encoder : Lavc57.6.100 libmp3lame
Stream mapping:
Stream #0:1 (g726) -> amerge:in0
Stream #1:1 (g726) -> amerge:in1
amerge -> Stream #0:0 (libmp3lame)
Press [q] to stop, [?] for help
[rtsp @ 0x2f71640] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
[rtsp @ 0x300fb40] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
[rtsp @ 0x2f71640] max delay reached. need to consume packet
[NULL @ 0x2f744a0] RTP: missed 18 packets
[flv @ 0x3058b00] Failed to update header with correct duration.
[flv @ 0x3058b00] Failed to update header with correct filesize.
size= 39kB time=00:00:04.54 bitrate= 70.2kbits/s
video:0kB audio:36kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 8.330614%
Thanks.
UPDATE 30 Oct 2015: I found an interesting detail when connecting 2 cameras (they have different microphones and I can hear the difference between them): the order of the "hello"s from the different cams depends on the ORDER OF THE INPUTS.
With the command
ffmpeg -i rtsp://cam2 -i rtsp://cam1 -map 0:a -map 1:a -filter_complex amix=inputs=2:duration=longest:dropout_transition=0 -vn -ar 22050 -f flv rtmp://172.22.45.38:1935/live/stream1
I hear "hello" from 1st cam and then in 1 second "hello" from 2nd cam.
And with the command
ffmpeg -i rtsp://cam1 -i rtsp://cam2 -map 0:a -map 1:a -filter_complex amix=inputs=2:duration=longest:dropout_transition=0 -vn -ar 22050 -f flv rtmp://172.22.45.38:1935/live/stream1
I hear "hello" from 2nd cam and then in 1 second "hello" from 1st cam.
So, as I understand it, ffmpeg reads the inputs not simultaneously, but in the order they are given.
Question: how do I tell ffmpeg to read the inputs simultaneously?
Using amix with two local files works perfectly, but here you can't make the two audio streams play at once.
When the input is a local file or a recorded stream, ffmpeg knows exactly when it starts, so the inputs can be mixed into one audio track.
But when the input is a live stream, ffmpeg doesn't know exactly when it starts, so the start time will differ between streaming URLs.
More importantly, ffmpeg does not handle its inputs concurrently; that's why the order of the "hello"s depends on the order of the inputs.
I know of one solution for this: Adobe FMLE (Flash Media Live Encoder), which supports timecode when streaming over RTMP. You can get the timecode from both live streams, and then you can finally mix the two audio tracks into one.
Perhaps you can start with this article: http://www.overdigital.com/2013/03/25/3ways-to-sync-data/
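If you can measure the lag between the two streams, another workaround worth sketching is -itsoffset, which shifts the timestamps of the input whose -i follows it. The 1-second value and the choice of which input to delay are assumptions based on the "hello" observations above, and you'd have to verify by ear how well this holds up against live RTSP inputs:
ffmpeg -i rtsp://cam2 -itsoffset 1 -i rtsp://cam1 -map 0:a -map 1:a -filter_complex amix=inputs=2:duration=longest:dropout_transition=0 -vn -ar 22050 -f flv rtmp://172.22.45.38:1935/live/stream1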
Try
ffmpeg -i rtsp://user:pass@172.22.5.202 -i rtsp://user:pass@172.22.5.202 \
-filter_complex \
"[0:a]asetpts=PTS-STARTPTS[a1];[1:a]asetpts=PTS-STARTPTS[a2]; \
[a1][a2]amix=inputs=2:duration=first:dropout_transition=3[a]" \
-map "[a]" -ar 22050 -vn -f flv rtmp://172.22.45.38:1935/live/stream1
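If the live inputs still drift apart over time, a variant to try is resampling each stream before mixing so that the audio is stretched or squeezed to match its timestamps (aresample=async=1). This is only a sketch under the same camera URLs as above, not a guaranteed fix:
ffmpeg -i rtsp://user:pass@172.22.5.202 -i rtsp://user:pass@172.22.5.202 \
-filter_complex \
"[0:a]aresample=async=1,asetpts=PTS-STARTPTS[a1]; \
[1:a]aresample=async=1,asetpts=PTS-STARTPTS[a2]; \
[a1][a2]amix=inputs=2:duration=first:dropout_transition=3[a]" \
-map "[a]" -ar 22050 -vn -f flv rtmp://172.22.45.38:1935/live/stream1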
Can someone please tell me the proper syntax for combining the following filters into a single command? I can't seem to figure it out.
The following command is being used to scale the video.
ffmpeg -i input.mp4 -c:v libx264 -preset medium -crf 23 -vf scale="'if(gt(a,4/3),1280,-1)':'if(gt(a,4/3),-1,720)'" -movflags +faststart output.mp4 2>&1
Then, I use the following code to scale and apply the watermark.
ffmpeg -i input.mp4 -vf "movie=logo.png, scale=200:-1 [wm]; [in][wm] overlay=5:main_h-overlay_h-5 [out]" output.mp4 2>&1
They work fine independently but every attempt I've made to combine the filter commands has been unsuccessful. Some assistance would be appreciated.
LOG
# ffmpeg -i /var/www/html/site/public_html/media/input.mp4 \
> -i /var/www/html/site/public_html/media/logo.png \
> -c:v libx264 -preset medium -crf 23 -filter_complex "[0]scale='if(gt(a,4/3),1280,-1)':'if(gt(a,4/3),-1,720)'[main];[1]scale=200:-1[wm];[main][wm]overlay=5:main_h-overlay_h-5" -movflags +faststart \
> /var/www/html/site/public_html/media/output.mp4
ffmpeg version git-2015-04-08-b926f02 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-11)
configuration: --prefix=/root/ffmpeg_build --extra-cflags=-I/root/ffmpeg_build/include --extra-ldflags=-L/root/ffmpeg_build/lib --bindir=/usr/bin --enable-gpl --enable-nonfree --enable-libfdk_aac --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libfreetype --enable-libtheora
libavutil 54. 22.101 / 54. 22.101
libavcodec 56. 34.100 / 56. 34.100
libavformat 56. 30.100 / 56. 30.100
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 13.101 / 5. 13.101
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/var/www/html/site/public_html/media/input.mp4':
Metadata:
major_brand : qt
minor_version : 0
compatible_brands: qt
creation_time : 2021-10-17 22:26:58
Duration: 00:00:02.40, start: 0.047891, bitrate: 2614 kb/s
Stream #0:0(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 81 kb/s (default)
Metadata:
creation_time : 2021-10-17 22:26:58
handler_name : Core Media Data Handler
Stream #0:1(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 716x1280, 2521 kb/s, 30 fps, 30 tbr, 600 tbn, 1200 tbc (default)
Metadata:
creation_time : 2021-10-17 22:26:58
handler_name : Core Media Data Handler
encoder : H.264
Input #1, png_pipe, from '/var/www/html/site/public_html/media/logo.png':
Duration: N/A, bitrate: N/A
Stream #1:0: Video: png, rgba, 468x100, 25 tbr, 25 tbn, 25 tbc
File '/var/www/html/site/public_html/media/output.mp4' already exists. Overwrite ? [y/N] y
[libx264 @ 0x30008a0] width not divisible by 2 (403x720)
Output #0, mp4, to '/var/www/html/site/public_html/media/output.mp4':
Metadata:
major_brand : qt
minor_version : 0
compatible_brands: qt
Stream #0:0: Video: h264, none, q=2-31, 128 kb/s, 30 fps (default)
Metadata:
encoder : Lavc56.34.100 libx264
Stream #0:1(und): Audio: aac, 0 channels, 128 kb/s (default)
Metadata:
creation_time : 2021-10-17 22:26:58
handler_name : Core Media Data Handler
encoder : Lavc56.34.100 libfdk_aac
Stream mapping:
Stream #0:1 (h264) -> scale (graph 0)
Stream #1:0 (png) -> scale (graph 0)
overlay (graph 0) -> Stream #0:0 (libx264)
Stream #0:0 -> #0:1 (aac (native) -> aac (libfdk_aac))
Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
Combined command (note that scale now uses -2 instead of -1, so the auto-computed dimension is rounded to an even number; libx264 requires even dimensions, which is what caused the "width not divisible by 2" error above):
ffmpeg -i input.mp4 -i logo.png -c:v libx264 -preset medium -crf 23 -filter_complex "[0]scale='if(gt(a,4/3),1280,-2)':'if(gt(a,4/3),-2,720)'[main];[1]scale=200:-1[wm];[main][wm]overlay=5:main_h-overlay_h-5" -c:a copy -movflags +faststart output.mp4
See FFmpeg Filtering Intro.
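In newer FFmpeg versions, a sketch of an alternative that avoids the if() expressions is the scale filter's force_original_aspect_ratio option; force_divisible_by is a later addition, so check that your build supports it before relying on this:
ffmpeg -i input.mp4 -i logo.png -c:v libx264 -preset medium -crf 23 -filter_complex "[0]scale=1280:720:force_original_aspect_ratio=decrease:force_divisible_by=2[main];[1]scale=200:-1[wm];[main][wm]overlay=5:main_h-overlay_h-5" -c:a copy -movflags +faststart output.mp4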
I'm trying to stream a webcam feed from a Raspberry Pi B to YouTube. The webcam used is a Logitech C920. If I use the h264 stream from the camera itself, it works fine using
ffmpeg -f alsa -i hw:1,0 -f v4l2 -vcodec h264 -video_size 854x480 -r 25 -i /dev/video0 -acodec aac -b:a 64000 -ar 48000 -bufsize 64k -b:v 1200k -bufsize 1024k -maxrate 1800k -vcodec copy -g 60 -r 30 -f flv rtmp://a.rtmp.youtube.com/live2/stream_here
So, for this to work with other non-h264 cameras like the Pi Cam or any other cheaper webcam, it needs to work with the raw stream and convert it to h264 using libx264. This is the whole point of using the Pi. Hence the second command:
ffmpeg -f alsa -ac 2 -i hw:1,0 -f v4l2 -i /dev/video0 -framerate 25 -video_size 1280x720 -c:v libx264 -preset veryfast -maxrate 1984k -bufsize 3968k -vf "format=yuv420p" -g 60 -c:a aac -b:a 128k -ar 44100 -f flv rtmp://a.rtmp.youtube.com/live2/stream_name
So this results in the following issue.
ffmpeg version git-2017-03-03-68ee800 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 4.9.2 (Raspbian 4.9.2-10)
configuration: --arch=armhf --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree --extra-libs=-lasound --enable-pthreads
libavutil 55. 47.101 / 55. 47.101
libavcodec 57. 82.100 / 57. 82.100
libavformat 57. 66.103 / 57. 66.103
libavdevice 57. 3.100 / 57. 3.100
libavfilter 6. 74.100 / 6. 74.100
libswscale 4. 3.101 / 4. 3.101
libswresample 2. 4.100 / 2. 4.100
libpostproc 54. 2.100 / 54. 2.100
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, alsa, from 'hw:1,0':
Duration: N/A, start: 1488650966.446293, bitrate: 1024 kb/s
Stream #0:0: Audio: pcm_s16le, 32000 Hz, stereo, s16, 1024 kb/s
Input #1, video4linux2,v4l2, from '/dev/video0':
Duration: N/A, start: 2227.042654, bitrate: 159252 kb/s
Stream #1:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 864x480, 159252 kb/s, 24 fps, 24 tbr, 1000k tbn, 1000k tbc
Stream mapping:
Stream #1:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
Stream #0:0 -> #0:1 (pcm_s16le (native) -> aac (native))
Press [q] to stop, [?] for help
[video4linux2,v4l2 @ 0x2e42310] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
Illegal instruction
pi@raspberrypi:~ $
If I add -thread_queue_size 512, I end up with:
pi@raspberrypi:~ $ ffmpeg -f alsa -ac 2 -i hw:1,0 -f v4l2 -thread_queue_size 512 -i /dev/video0 -framerate 25 -video_size 1280x720 -c:v libx264 -preset veryfast -maxrate 1984k -bufsize 3968k -vf "format=yuv420p" -g 60 -c:a aac -b:a 128k -ar 44100 -f flv rtmp://a.rtmp.youtube.com/live2/zqg7-98wy-60b6-f2yx
ffmpeg version git-2017-03-03-68ee800 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 4.9.2 (Raspbian 4.9.2-10)
configuration: --arch=armhf --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree --extra-libs=-lasound --enable-pthreads
libavutil 55. 47.101 / 55. 47.101
libavcodec 57. 82.100 / 57. 82.100
libavformat 57. 66.103 / 57. 66.103
libavdevice 57. 3.100 / 57. 3.100
libavfilter 6. 74.100 / 6. 74.100
libswscale 4. 3.101 / 4. 3.101
libswresample 2. 4.100 / 2. 4.100
libpostproc 54. 2.100 / 54. 2.100
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, alsa, from 'hw:1,0':
Duration: N/A, start: 1488651430.836193, bitrate: 1024 kb/s
Stream #0:0: Audio: pcm_s16le, 32000 Hz, stereo, s16, 1024 kb/s
Input #1, video4linux2,v4l2, from '/dev/video0':
Duration: N/A, start: 2691.407641, bitrate: 159252 kb/s
Stream #1:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 864x480, 159252 kb/s, 24 fps, 24 tbr, 1000k tbn, 1000k tbc
Stream mapping:
Stream #1:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
Stream #0:0 -> #0:1 (pcm_s16le (native) -> aac (native))
Press [q] to stop, [?] for help
Illegal instruction
pi@raspberrypi:~ $
Where exactly does -thread_queue_size belong?
Notes:
ffmpeg was built using this reference
--extra-libs=-lasound and --enable-pthreads were used
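(For what it's worth, -thread_queue_size is a per-input option: it applies to the input whose -i follows it. So to cover the ALSA input as well, it would presumably need to appear twice, e.g.:
ffmpeg -f alsa -thread_queue_size 512 -ac 2 -i hw:1,0 -f v4l2 -thread_queue_size 512 -i /dev/video0 -framerate 25 -video_size 1280x720 -c:v libx264 -preset veryfast -maxrate 1984k -bufsize 3968k -vf "format=yuv420p" -g 60 -c:a aac -b:a 128k -ar 44100 -f flv rtmp://a.rtmp.youtube.com/live2/stream_name
The 512 values here are just the ones from the question; larger values may be needed.)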
I totally forgot that the Raspberry Pi has hardware support for H.264.
So I followed this tutorial and we're live at 30% CPU usage with realtime streaming on the Raspberry Pi B (ARMv6).
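For reference, a minimal sketch of what the hardware-encoded variant might look like, assuming an FFmpeg build with the Pi's OpenMAX encoder (h264_omx) enabled; the tutorial's exact command and bitrate may differ:
ffmpeg -f alsa -thread_queue_size 512 -ac 2 -i hw:1,0 -f v4l2 -thread_queue_size 512 -i /dev/video0 -c:v h264_omx -b:v 1800k -g 60 -c:a aac -b:a 128k -ar 44100 -f flv rtmp://a.rtmp.youtube.com/live2/stream_name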
I am developing an application.
People upload videos from their mobiles and from other places.
Using a CMS in PHP (the language the application is developed in), I need to generate a single video from these partial uploads.
I am running tests with FFmpeg from the command line:
ffmpeg -i concat:IMG_1916.mp4\|IMG_1917.mp4 -c copy videoLoop.mp4
When I run this command, it says:
ffmpeg version 3.2.4 Copyright (c) 2000-2017 the FFmpeg developers
built with Apple LLVM version 8.0.0 (clang-800.0.42.1)
configuration: --prefix=/usr/local/Cellar/ffmpeg/3.2.4 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libmp3lame --enable-libx264 --enable-libxvid --enable-opencl --disable-lzma --enable-vda
libavutil 55. 34.101 / 55. 34.101
libavcodec 57. 64.101 / 57. 64.101
libavformat 57. 56.101 / 57. 56.101
libavdevice 57. 1.100 / 57. 1.100
libavfilter 6. 65.100 / 6. 65.100
libavresample 3. 1. 0 / 3. 1. 0
libswscale 4. 2.100 / 4. 2.100
libswresample 2. 3.100 / 2. 3.100
libpostproc 54. 1.100 / 54. 1.100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f8515000000] Found duplicated MOOV Atom. Skipped it
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'concat:IMG_1916.mp4|IMG_1917.mp4':
Metadata:
encoder : Lavf57.66.102
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
Duration: 00:00:04.27, start: 0.000000, bitrate: 26792 kb/s
Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 1920x1080, 11978 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 120 kb/s (default)
Metadata:
handler_name : SoundHandler
Output #0, mp4, to 'videoLoop.mp4':
Metadata:
compatible_brands: isomiso2avc1mp41
major_brand : isom
minor_version : 512
encoder : Lavf57.56.101
Stream #0:0(und): Video: h264 (Constrained Baseline) ([33][0][0][0] / 0x0021), yuv420p, 1920x1080, q=2-31, 11978 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 30k tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(und): Audio: aac (LC) ([64][0][0][0] / 0x0040), 44100 Hz, mono, 120 kb/s (default)
Metadata:
handler_name : SoundHandler
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
frame= 127 fps=0.0 q=-1.0 Lsize= 6264kB time=00:00:04.22 bitrate=12142.8kbits/s speed= 376x
video:6196kB audio:63kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.076698%
This execution generates a video, but it is not the concatenation of the 2 specified files; it contains only the first one.
Why doesn't it join the 2?
The videos to be uploaded will be in very different formats, so I cannot rely on a fixed codec.
The concat protocol only works with formats that can be concatenated at the byte level (such as MPEG-TS); MP4 cannot, which is why only the first file is read. You will have to make all inputs similar before concatenation, then use the concat filter. A rough example (you will of course have to customize it to your needs):
ffmpeg -i input0 -i input1 -filter_complex \
"[0:v]fps=25,scale=1280:720,format=yuv420p,setsar=1,setpts=PTS-STARTPTS[v0]; \
[1:v]fps=25,scale=1280:720,format=yuv420p,setsar=1,setpts=PTS-STARTPTS[v1]; \
[0:a]aformat=channel_layouts=stereo:sample_rates=44100,asetpts=PTS-STARTPTS[a0]; \
[1:a]aformat=channel_layouts=stereo:sample_rates=44100,asetpts=PTS-STARTPTS[a1]; \
[v0][a0][v1][a1]concat=n=2:v=1:a=1[v][a]" \
-map "[v]" -map "[a]" -c:v libx264 -c:a aac -movflags +faststart output.mp4
Using this adaptation of the code, I can generate a video from the two sources.
ffmpeg -i IMG_1916.mp4 -i IMG_1917.mp4 \
-filter_complex \
"[0:v:0] [0:a:0] \
[1:v:0] [1:a:0] \
concat=n=2:v=1:a=1 [v] [a]" \
-map "[v]" -map "[a]" videoLoop.mp4
I'm not sure if I can concatenate any video format, from any device or source, with this code.
I am trying to take direct video output from a 4K Sony Handycam, via HDMI, directly into a Blackmagic Intensity Pro 4K. I can verify that the camera, HDMI and Blackmagic card are working, as I can capture and view video using the provided "Media Express" program. When I use ffmpeg I do get video output, but I also get a buffer overrun.
Here is the command:
time ffmpeg -f decklink -i "Intensity Pro 4K@20" -c:v nvenc -b:v 100M -vf "yadif=0:-1:0" -pix_fmt yuv420p -crf 29.97 -strict -2 output.mp4
And I get the following output:
ffmpeg version N-76538-gb83c849 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
configuration: --enable-nonfree --enable-nvenc --enable-nvresize --extra-cflags=-I../cudautils --extra-ldflags=-L../cudautils --enable-gpl --enable-libx264 --enable-libx265 --enable-decklink --extra-cflags=-I/home/tristan/Downloads/BlackmagicDeckLinkSDK10.6.5/Linux/include --extra-ldflags=-L/home/tristan/Downloads/BlackmagicDeckLinkSDK10.6.5/Linux/include
libavutil 55. 5.100 / 55. 5.100
libavcodec 57. 15.100 / 57. 15.100
libavformat 57. 14.100 / 57. 14.100
libavdevice 57. 0.100 / 57. 0.100
libavfilter 6. 15.100 / 6. 15.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
[decklink @ 0x1ccd6e0] Found Decklink mode 3840 x 2160 with rate 29.97
[decklink @ 0x1ccd6e0] Stream #1: not enough frames to estimate rate; consider increasing probesize
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, decklink, from 'Intensity Pro 4K@20':
Duration: N/A, start: 0.000000, bitrate: 1536 kb/s
Stream #0:0: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s
Stream #0:1: Video: rawvideo (UYVY / 0x59565955), uyvy422, 3840x2160, -5 kb/s, 29.97 tbr, 1000k tbn, 29.97 tbc
Codec AVOption crf (Select the quality for constant quality mode) specified for output file #0 (output.mp4) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
File 'output.mp4' already exists. Overwrite ? [y/N] y
Output #0, mp4, to 'output.mp4':
Metadata:
encoder : Lavf57.14.100
Stream #0:0: Video: h264 (nvenc) ([33][0][0][0] / 0x0021), yuv420p, 3840x2160, q=-1--1, 100000 kb/s, 29.97 fps, 30k tbn, 29.97 tbc
Metadata:
encoder : Lavc57.15.100 nvenc
Stream #0:1: Audio: aac ([64][0][0][0] / 0x0040), 48000 Hz, stereo, fltp, 128 kb/s
Metadata:
encoder : Lavc57.15.100 aac
Stream mapping:
Stream #0:1 -> #0:0 (rawvideo (native) -> h264 (nvenc))
Stream #0:0 -> #0:1 (pcm_s16le (native) -> aac (native))
Press [q] to stop, [?] for help
[decklink @ 0x1ccd6e0] Decklink input buffer overrun!:03.15 bitrate=70411.7kbits/s
Last message repeated 1 times
[decklink @ 0x1ccd6e0] Decklink input buffer overrun!:03.54 bitrate=73110.9kbits/s
Last message repeated 20 times
[decklink @ 0x1ccd6e0] Decklink input buffer overrun!:03.92 bitrate=76270.2kbits/s
Last message repeated 15 times
[decklink @ 0x1ccd6e0] Decklink input buffer overrun!:04.28 bitrate=78367.6kbits/s
Last message repeated 61 times
frame= 140 fps= 22 q=-0.0 Lsize= 57266kB time=00:00:04.67 bitrate=100425.2kbits/s
video:57187kB audio:72kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.009844%
[decklink @ 0x1ccd6e0] Decklink input buffer overrun!
Last message repeated 7 times
[aac @ 0x1cd7020] Qavg: 215.556
real 0m8.808s
user 0m5.785s
sys 0m1.749s
I'd appreciate some insight into this, whether that's just some commands that may fix the issue, or otherwise.
I convert AVI to FLV with ffmpeg using -sameq parameter (same quality):
ffmpeg -i test.avi -sameq -f flv sameq.flv
The resulting file has the same video and audio quality as the original, but it's more than twice the original file size:
84M sameq.flv
41M test.avi
Why does it happen?
Transcoder output:
ffmpeg version N-34750-g070d2d7, Copyright (c) 2000-2011 the FFmpeg developers
built on Nov 12 2011 11:23:07 with gcc 4.6.1
configuration: --enable-gpl --enable-version3 --enable-nonfree --enable-postproc --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-x11grab
libavutil 51. 24. 1 / 51. 24. 1
libavcodec 53. 33. 0 / 53. 33. 0
libavformat 53. 20. 0 / 53. 20. 0
libavdevice 53. 4. 0 / 53. 4. 0
libavfilter 2. 48. 0 / 2. 48. 0
libswscale 2. 1. 0 / 2. 1. 0
libpostproc 51. 2. 0 / 51. 2. 0
Input #0, avi, from 'test.avi':
Duration: 00:06:30.00, start: 0.000000, bitrate: 866 kb/s
Stream #0:0: Video: mpeg4 (Advanced Real Time Simple Profile) (DIVX / 0x58564944), yuv420p, 400x300 [SAR 1:1 DAR 4:3], 25 tbr, 25 tbn, 25 tbc
Stream #0:1: Audio: mp3 (U[0][0][0] / 0x0055), 44100 Hz, mono, s16, 64 kb/s
[buffer @ 0xa247ae0] w:400 h:300 pixfmt:yuv420p tb:1/1000000 sar:1/1 sws_param:
Output #0, flv, to 'sameq.flv':
Metadata:
encoder : Lavf53.20.0
Stream #0:0: Video: flv1 ([2][0][0][0] / 0x0002), yuv420p, 400x300 [SAR 1:1 DAR 4:3], q=2-31, 200 kb/s, 1k tbn, 25 tbc
Stream #0:1: Audio: mp3 ([2][0][0][0] / 0x0002), 44100 Hz, mono, s16, 128 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (mpeg4 -> flv)
Stream #0:1 -> #0:1 (mp3 -> libmp3lame)
Press [q] to stop, [?] for help
frame= 9742 fps=255 q=0.0 Lsize= 85074kB time=00:06:30.00 bitrate=1787.0kbits/s
video:79163kB audio:5525kB global headers:0kB muxing overhead 0.455568%
Two things come to mind:
Compress a video without an audio stream to eliminate the audio portion of this issue. BTW, the audio source is HALF the bitrate of the output, which increases the size a little. Use the -ar and -ab switches to control the output.
Check out this article on qscale vs. quality when using the -qscale option. Add in the -b (bitrate) and -s (size) switches and tweak them to your needs.
When all else fails, there are a few more switches you can try from the ffmpeg website, or try the newer H.264 compression; the two-pass option is recommended (see the sketch below). Have fun compressing.
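As a rough two-pass sketch (the bitrate here is an assumption to tune; the build shown above has libx264 and libmp3lame enabled):
ffmpeg -y -i test.avi -c:v libx264 -b:v 800k -pass 1 -an -f null /dev/null
ffmpeg -i test.avi -c:v libx264 -b:v 800k -pass 2 -c:a libmp3lame -b:a 128k output.mp4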
It's because of -sameq. Despite the name, it means "same quantizer", not "same quality", so you get good quality but pay the price with a bigger file size.
Can you try adding:
-qcomp 1.0
It is the video quantizer scale compression (VBR), a constant of the ratecontrol equation (default 0.5). The recommended range for the default rc_eq is 0.0-1.0.
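For example, a sketch combining it with a target bitrate (the 900k value is an assumption to tune; with -qcomp 1.0 the ratecontrol leans toward constant-quantizer behavior):
ffmpeg -i test.avi -b:v 900k -qcomp 1.0 -f flv out.flv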