ffmpeg: add semi-transparent watermark (png) with different size - filter

I'm trying to add a PNG watermark (with an alpha channel) over an h264 video so that it appears semi-transparent. Using the overlay filter, I managed to add the watermark to the video:
ffmpeg -y -i input.mp4 -i watermark.png -filter_complex "[0][1] overlay=0:0" -c:v libx264 -an output.mp4
But the overlay filter does not provide an opacity option, so I tried the blend filter instead. However, when I use the original resolutions, an error comes up:
ffmpeg -y -i input.mp4 -i watermark.png -filter_complex "[0][1]blend=all_mode=overlay:all_opacity=0.3" -c:v libx264 -an output.mp4
Output:
libavutil 55. 28.100 / 55. 28.100
libavcodec 57. 48.101 / 57. 48.101
libavformat 57. 41.100 / 57. 41.100
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 47.100 / 6. 47.100
libavresample 3. 0. 0 / 3. 0. 0
libswscale 4. 1.100 / 4. 1.100
libswresample 2. 1.100 / 2. 1.100
libpostproc 54. 0.100 / 54. 0.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.41.100
Duration: 00:00:45.08, start: 0.000000, bitrate: 1872 kb/s
Stream #0:0(und): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p, 1920x1080, 1869 kb/s, 29.72 fps, 30 tbr, 16k tbn, 32k tbc (default)
Metadata:
handler_name : VideoHandler
Input #1, png_pipe, from 'watermark.png':
Duration: N/A, bitrate: N/A
Stream #1:0: Video: png, rgba(pc), 64x64, 25 tbr, 25 tbn, 25 tbc
[Parsed_blend_0 @ 00750600] First input link top parameters (size 1920x1080, SAR 0:1) do not match the corresponding second input link bottom parameters (64x64, SAR 0:1)
[Parsed_blend_0 @ 00750600] Failed to configure output pad on Parsed_blend_0
Error configuring complex filters.
Invalid argument
It looks like a resolution mismatch between the two inputs, so I tried scaling the watermark before blending:
ffmpeg -y -i input.mp4 -i watermark.png -filter_complex "[0:0]scale=1920x1080[a]; [1:0]scale=1920x1080[b]; [a][b]blend=all_mode=overlay:all_opacity=0.3" -c:v libx264 -an output.mp4
FFmpeg works with these parameters, but the output wasn't what I expected, because the watermark had been stretched to the full frame.
Any idea how to blend a watermark of a different resolution over the video, with transparency but without stretching it?
Here are the test files (ffmpeg version 3.1.2):
https://drive.google.com/open?id=0B2X3VLS01TogdHVJZ2I1ZC1GUUU
https://drive.google.com/open?id=0B2X3VLS01TogbjhuZTlBOFFpN1k

Use the lut filter to scale down the watermark's alpha channel before overlay:
ffmpeg -y -i input.mp4 -i watermark.png -filter_complex
"[1]lut=a=val*0.3[a];[0][a]overlay=0:0"
-c:v libx264 -an output.mp4
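If the watermark PNG might lack an alpha plane, a variant of the same idea is to force one and then lower its opacity with colorchannelmixer (a sketch borrowing the approach from the scale2ref answer below; untested against the linked files):
ffmpeg -y -i input.mp4 -i watermark.png -filter_complex
"[1]format=rgba,colorchannelmixer=aa=0.3[a];[0][a]overlay=0:0"
-c:v libx264 -an output.mp4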

Logo scaled relative to image size v1 (centered)
ffmpeg -y \
-i "bird.jpg" \
-i "logo.png" \
-filter_complex "\
[1][0]scale2ref=h=ow/mdar:w=iw/4[#A logo][bird];\
[#A logo]format=argb,colorchannelmixer=aa=0.5[#B logo transparent];\
[bird][#B logo transparent]overlay\
=(main_w-w)/2:(main_h-h)/2" \
image_with_logo.jpg
#A preserve aspect ratio of logo, scale width of logo to 1/4 of image width
#B add alpha channel, reduce opacity to 50%
# overlay image and logo
# position is horizontally and vertically centered
Logo scaled relative to image size v2 (bottom right with margin)
ffmpeg -y \
-i "bird.jpg" \
-i "logo.png" \
-filter_complex "\
[1][0]scale2ref=h=ow/mdar:w=iw/4[#A logo][bird];\
[#A logo]format=argb,colorchannelmixer=aa=0.5[#B logo transparent];\
[bird][#B logo transparent]overlay\
=(main_w-w)-(main_w*0.1):(main_h-h)-(main_h*0.1)" \
image_with_logo.jpg
#A preserve aspect ratio of logo, scale width of logo to 1/4 of image width
#B add alpha channel, reduce opacity to 50%
# overlay image and logo
# position: bottom right, with a margin of 10% from the edges
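The same chain also works when the base input is a video rather than a still image; a sketch under that assumption (input file, output file, and link labels are illustrative):
ffmpeg -y \
-i "input.mp4" \
-i "logo.png" \
-filter_complex "\
[1][0]scale2ref=h=ow/mdar:w=iw/4[logo][vid];\
[logo]format=argb,colorchannelmixer=aa=0.5[logotrans];\
[vid][logotrans]overlay\
=(main_w-w)-(main_w*0.1):(main_h-h)-(main_h*0.1)" \
-c:v libx264 -an video_with_logo.mp4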

Related

FFMPEG Multiple Filters (Scale Video + Scale / Apply Watermark)

Can someone please tell me the proper syntax for combining the following filters into a single command? I can't seem to figure it out.
The following command is being used to scale the video.
ffmpeg -i input.mp4 -c:v libx264 -preset medium -crf 23 -vf scale="'if(gt(a,4/3),1280,-1)':'if(gt(a,4/3),-1,720)'" -movflags +faststart output.mp4 2>&1
Then, I use the following code to scale and apply the watermark.
ffmpeg -i input.mp4 -vf "movie=logo.png, scale=200:-1 [wm]; [in][wm] overlay=5:main_h-overlay_h-5 [out]" output.mp4 2>&1
They work fine independently but every attempt I've made to combine the filter commands has been unsuccessful. Some assistance would be appreciated.
LOG
# ffmpeg -i /var/www/html/site/public_html/media/input.mp4 \
> -i /var/www/html/site/public_html/media/logo.png \
> -c:v libx264 -preset medium -crf 23 -filter_complex "[0]scale='if(gt(a,4/3),1280,-1)':'if(gt(a,4/3),-1,720)'[main];[1]scale=200:-1[wm];[main][wm]overlay=5:main_h-overlay_h-5" -movflags +faststart \
> /var/www/html/site/public_html/media/output.mp4
ffmpeg version git-2015-04-08-b926f02 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-11)
configuration: --prefix=/root/ffmpeg_build --extra-cflags=-I/root/ffmpeg_build/include --extra-ldflags=-L/root/ffmpeg_build/lib --bindir=/usr/bin --enable-gpl --enable-nonfree --enable-libfdk_aac --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libfreetype --enable-libtheora
libavutil 54. 22.101 / 54. 22.101
libavcodec 56. 34.100 / 56. 34.100
libavformat 56. 30.100 / 56. 30.100
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 13.101 / 5. 13.101
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/var/www/html/site/public_html/media/input.mp4':
Metadata:
major_brand : qt
minor_version : 0
compatible_brands: qt
creation_time : 2021-10-17 22:26:58
Duration: 00:00:02.40, start: 0.047891, bitrate: 2614 kb/s
Stream #0:0(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 81 kb/s (default)
Metadata:
creation_time : 2021-10-17 22:26:58
handler_name : Core Media Data Handler
Stream #0:1(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 716x1280, 2521 kb/s, 30 fps, 30 tbr, 600 tbn, 1200 tbc (default)
Metadata:
creation_time : 2021-10-17 22:26:58
handler_name : Core Media Data Handler
encoder : H.264
Input #1, png_pipe, from '/var/www/html/site/public_html/media/logo.png':
Duration: N/A, bitrate: N/A
Stream #1:0: Video: png, rgba, 468x100, 25 tbr, 25 tbn, 25 tbc
File '/var/www/html/site/public_html/media/output.mp4' already exists. Overwrite ? [y/N] y
[libx264 @ 0x30008a0] width not divisible by 2 (403x720)
Output #0, mp4, to '/var/www/html/site/public_html/media/output.mp4':
Metadata:
major_brand : qt
minor_version : 0
compatible_brands: qt
Stream #0:0: Video: h264, none, q=2-31, 128 kb/s, 30 fps (default)
Metadata:
encoder : Lavc56.34.100 libx264
Stream #0:1(und): Audio: aac, 0 channels, 128 kb/s (default)
Metadata:
creation_time : 2021-10-17 22:26:58
handler_name : Core Media Data Handler
encoder : Lavc56.34.100 libfdk_aac
Stream mapping:
Stream #0:1 (h264) -> scale (graph 0)
Stream #1:0 (png) -> scale (graph 0)
overlay (graph 0) -> Stream #0:0 (libx264)
Stream #0:0 -> #0:1 (aac (native) -> aac (libfdk_aac))
Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
Combined command (with -1 replaced by -2 in the scale expressions, so the auto-computed dimension is rounded to an even number and the width not divisible by 2 error is avoided):
ffmpeg -i input.mp4 -i logo.png -c:v libx264 -preset medium -crf 23 -filter_complex "[0]scale='if(gt(a,4/3),1280,-2)':'if(gt(a,4/3),-2,720)'[main];[1]scale=200:-1[wm];[main][wm]overlay=5:main_h-overlay_h-5" -c:a copy -movflags +faststart output.mp4
See FFmpeg Filtering Intro.

thread_queue_size during Live Streaming from ffmpeg

I'm trying to stream a webcam stream from a Raspberry Pi B to YouTube. The webcam used is a Logitech C920. If I use the h264 stream from the camera itself, it works fine using:
ffmpeg -f alsa -i hw:1,0 -f v4l2 -vcodec h264 -video_size 854x480 -r 25 -i /dev/video0 -acodec aac -b:a 64000 -ar 48000 -bufsize 64k -b:v 1200k -bufsize 1024k -maxrate 1800k -vcodec copy -g 60 -r 30 -f flv
rtmp://a.rtmp.youtube.com/live2/stream_here
So, for this to work with other non-h264 cameras like the Pi Cam or any other cheaper webcam, it needs to take the raw stream and convert it to h264 using libx264. This is the whole point of using the Pi. Hence the second command:
ffmpeg -f alsa -ac 2 -i hw:1,0 -f v4l2 -i /dev/video0 -framerate 25 -video_size 1280x720 -c:v libx264 -preset veryfast -maxrate 1984k -bufsize 3968k -vf "format=yuv420p" -g 60 -c:a aac -b:a 128k -ar 44100 -f flv rtmp://a.rtmp.youtube.com/live2/stream_name
So this results in the following issue.
ffmpeg version git-2017-03-03-68ee800 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 4.9.2 (Raspbian 4.9.2-10)
configuration: --arch=armhf --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree --extra-libs=-lasound --enable-pthreads
libavutil 55. 47.101 / 55. 47.101
libavcodec 57. 82.100 / 57. 82.100
libavformat 57. 66.103 / 57. 66.103
libavdevice 57. 3.100 / 57. 3.100
libavfilter 6. 74.100 / 6. 74.100
libswscale 4. 3.101 / 4. 3.101
libswresample 2. 4.100 / 2. 4.100
libpostproc 54. 2.100 / 54. 2.100
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, alsa, from 'hw:1,0':
Duration: N/A, start: 1488650966.446293, bitrate: 1024 kb/s
Stream #0:0: Audio: pcm_s16le, 32000 Hz, stereo, s16, 1024 kb/s
Input #1, video4linux2,v4l2, from '/dev/video0':
Duration: N/A, start: 2227.042654, bitrate: 159252 kb/s
Stream #1:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 864x480, 159252 kb/s, 24 fps, 24 tbr, 1000k tbn, 1000k tbc
Stream mapping:
Stream #1:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
Stream #0:0 -> #0:1 (pcm_s16le (native) -> aac (native))
Press [q] to stop, [?] for help
[video4linux2,v4l2 @ 0x2e42310] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
Illegal instruction
pi@raspberrypi:~ $
If I add
-thread_queue_size 512
I end up with:
pi@raspberrypi:~ $ ffmpeg -f alsa -ac 2 -i hw:1,0 -f v4l2 -thread_queue_size 512 -i /dev/video0 -framerate 25 -video_size 1280x720 -c:v libx264 -preset veryfast -maxrate 1984k -bufsize 3968k -vf "format=yuv420p" -g 60 -c:a aac -b:a 128k -ar 44100 -f flv rtmp://a.rtmp.youtube.com/live2/zqg7-98wy-60b6-f2yx
ffmpeg version git-2017-03-03-68ee800 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 4.9.2 (Raspbian 4.9.2-10)
configuration: --arch=armhf --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree --extra-libs=-lasound --enable-pthreads
libavutil 55. 47.101 / 55. 47.101
libavcodec 57. 82.100 / 57. 82.100
libavformat 57. 66.103 / 57. 66.103
libavdevice 57. 3.100 / 57. 3.100
libavfilter 6. 74.100 / 6. 74.100
libswscale 4. 3.101 / 4. 3.101
libswresample 2. 4.100 / 2. 4.100
libpostproc 54. 2.100 / 54. 2.100
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, alsa, from 'hw:1,0':
Duration: N/A, start: 1488651430.836193, bitrate: 1024 kb/s
Stream #0:0: Audio: pcm_s16le, 32000 Hz, stereo, s16, 1024 kb/s
Input #1, video4linux2,v4l2, from '/dev/video0':
Duration: N/A, start: 2691.407641, bitrate: 159252 kb/s
Stream #1:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 864x480, 159252 kb/s, 24 fps, 24 tbr, 1000k tbn, 1000k tbc
Stream mapping:
Stream #1:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
Stream #0:0 -> #0:1 (pcm_s16le (native) -> aac (native))
Press [q] to stop, [?] for help
Illegal instruction
pi@raspberrypi:~ $
Where exactly does -thread_queue_size belong?
Notes:
ffmpeg was built using this reference
--extra-libs=-lasound and --enable-pthreads were used
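For what it's worth, -thread_queue_size is a per-input option: it applies only to the -i that follows it, so raising it for both the ALSA and the v4l2 input means writing it twice. A sketch of the placement (note that -video_size and -framerate are also input options and belong before their -i; the Illegal instruction crash above looks like a separate build/CPU issue, so this alone may not fix it):
ffmpeg -f alsa -thread_queue_size 512 -ac 2 -i hw:1,0 \
-f v4l2 -thread_queue_size 512 -video_size 1280x720 -framerate 25 -i /dev/video0 \
-c:v libx264 -preset veryfast -maxrate 1984k -bufsize 3968k -vf "format=yuv420p" -g 60 \
-c:a aac -b:a 128k -ar 44100 -f flv rtmp://a.rtmp.youtube.com/live2/stream_name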
I totally forgot that the Raspberry Pi has hardware support for h264.
So I followed this tutorial, and we're live at 30% CPU usage with real-time streaming on the Raspberry Pi B (ARMv6).
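For anyone taking the same route: the Pi's hardware encoder is exposed as h264_omx in ffmpeg builds configured with OpenMAX support. A hedged sketch of the earlier command with the software encoder swapped out (untested here; h264_omx is rate-controlled with -b:v and does not take libx264 options like -preset):
ffmpeg -f alsa -thread_queue_size 512 -ac 2 -i hw:1,0 \
-f v4l2 -thread_queue_size 512 -video_size 1280x720 -framerate 25 -i /dev/video0 \
-c:v h264_omx -b:v 1200k -vf "format=yuv420p" \
-c:a aac -b:a 128k -ar 44100 -f flv rtmp://a.rtmp.youtube.com/live2/stream_name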

FFmpeg - Concatenate videos with unknown format

I am developing an application where people upload videos from their mobiles and from other places.
Using a CMS in PHP (the language the application is developed in), I need to generate a single video from these partial uploads.
I am running tests with FFmpeg from the command line:
ffmpeg -i concat:IMG_1916.mp4\|IMG_1917.mp4 -c copy videoLoop.mp4
When I run this command, it outputs:
ffmpeg version 3.2.4 Copyright (c) 2000-2017 the FFmpeg developers
built with Apple LLVM version 8.0.0 (clang-800.0.42.1)
configuration: --prefix=/usr/local/Cellar/ffmpeg/3.2.4 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libmp3lame --enable-libx264 --enable-libxvid --enable-opencl --disable-lzma --enable-vda
libavutil 55. 34.101 / 55. 34.101
libavcodec 57. 64.101 / 57. 64.101
libavformat 57. 56.101 / 57. 56.101
libavdevice 57. 1.100 / 57. 1.100
libavfilter 6. 65.100 / 6. 65.100
libavresample 3. 1. 0 / 3. 1. 0
libswscale 4. 2.100 / 4. 2.100
libswresample 2. 3.100 / 2. 3.100
libpostproc 54. 1.100 / 54. 1.100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f8515000000] Found duplicated MOOV Atom. Skipped it
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'concat:IMG_1916.mp4|IMG_1917.mp4':
Metadata:
encoder : Lavf57.66.102
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
Duration: 00:00:04.27, start: 0.000000, bitrate: 26792 kb/s
Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 1920x1080, 11978 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 120 kb/s (default)
Metadata:
handler_name : SoundHandler
Output #0, mp4, to 'videoLoop.mp4':
Metadata:
compatible_brands: isomiso2avc1mp41
major_brand : isom
minor_version : 512
encoder : Lavf57.56.101
Stream #0:0(und): Video: h264 (Constrained Baseline) ([33][0][0][0] / 0x0021), yuv420p, 1920x1080, q=2-31, 11978 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 30k tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(und): Audio: aac (LC) ([64][0][0][0] / 0x0040), 44100 Hz, mono, 120 kb/s (default)
Metadata:
handler_name : SoundHandler
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
frame= 127 fps=0.0 q=-1.0 Lsize= 6264kB time=00:00:04.22 bitrate=12142.8kbits/s speed= 376x
video:6196kB audio:63kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.076698%
This run generates a video, but not one concatenated from the 2 specified inputs; it contains only the first one.
Why doesn't it join the 2?
The uploaded videos will be in very different formats, so I cannot rely on a fixed codec.
The concat protocol only works for formats that can simply be appended at the byte level, such as MPEG-TS; MP4 is not one of them, which is why the log shows Found duplicated MOOV Atom and everything after the first file is ignored. You will have to make all inputs similar before concatenation, then use the concat filter. A rough example (you will of course have to customize it to your needs):
ffmpeg -i input0 -i input1 -filter_complex \
"[0:v]fps=25,scale=1280:720,format=yuv420p,setsar=1,setpts=PTS-STARTPTS[v0]; \
[1:v]fps=25,scale=1280:720,format=yuv420p,setsar=1,setpts=PTS-STARTPTS[v1]; \
[0:a]aformat=channel_layouts=stereo:sample_rates=44100,asetpts=PTS-STARTPTS[a0]; \
[1:a]aformat=channel_layouts=stereo:sample_rates=44100,asetpts=PTS-STARTPTS[a1]; \
[v0][a0][v1][a1]concat=n=2:v=1:a=1[v][a]" \
-map "[v]" -map "[a]" -c:v libx264 -c:a aac -movflags +faststart output.mp4
Using this adaptation of the code, I can generate a video from the two sources:
ffmpeg -i IMG_1916.mp4 -i IMG_1917.mp4 \
-filter_complex \
"[0:v:0] [0:a:0] \
[1:v:0] [1:a:0] \
concat=n=2:v=1:a=1 [v] [a]" \
-map "[v]" -map "[a]" videoLoop.mp4
I'm not sure whether I can concatenate videos of any format, from any device or source, with this code.
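A note on that worry: the concat filter requires all segments to share the same video resolution (parameters such as pixel format and sample rate are negotiated automatically), which is exactly why the first answer normalizes fps, size, format, and SAR first. To see what an upload actually contains before building the filtergraph, an ffprobe call along these lines is handy (a sketch; adjust the entries as needed):
ffprobe -v error -show_entries stream=codec_type,codec_name,width,height,r_frame_rate,sample_rate,channel_layout -of default=noprint_wrappers=1 IMG_1916.mp4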

ffmpeg amerge and amix filter delay

I need to take the audio streams from several IP cameras and merge them into one file, so that they sound simultaneously.
I tried the amix filter (for testing purposes I take the audio stream twice from the same camera; and yes, I tried 2 cameras - the result is the same):
ffmpeg -i rtsp://user:pass@172.22.5.202 -i rtsp://user:pass@172.22.5.202 -map 0:a -map 1:a -filter_complex amix=inputs=2:duration=first:dropout_transition=3 -ar 22050 -vn -f flv rtmp://172.22.45.38:1935/live/stream1
Result: I say "hello", and in the speakers I hear the first "hello" and one second later the second "hello", instead of hearing the two "hello"s simultaneously.
Then I tried the amerge filter:
ffmpeg -i rtsp://user:pass@172.22.5.202 -i rtsp://user:pass@172.22.5.202 -map 0:a -map 1:a -filter_complex amerge -ar 22050 -vn -f flv rtmp://172.22.45.38:1935/live/stream1
Result: the same as in the first example, except that now I hear the first "hello" in the left speaker and one second later the second "hello" in the right speaker, instead of hearing the two "hello"s in both speakers simultaneously.
So, the question is: how do I make them sound simultaneously? Maybe you know some parameter, or some other command?
P.S. Here is the full command-line output for both variants if you need it:
amix:
[root@minjust ~]# ffmpeg -i rtsp://admin:12345@172.22.5.202 -i rtsp://admin:12345@172.22.5.202 -map 0:a -map 1:a -filter_complex amix=inputs=2:duration=longest:dropout_transition=0 -vn -ar 22050 -f flv rtmp://172.22.45.38:1935/live/stream1
ffmpeg version N-76031-g9099079 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-16)
configuration: --enable-gpl --enable-libx264 --enable-libmp3lame --enable-nonfree --enable-version3
libavutil 55. 4.100 / 55. 4.100
libavcodec 57. 6.100 / 57. 6.100
libavformat 57. 4.100 / 57. 4.100
libavdevice 57. 0.100 / 57. 0.100
libavfilter 6. 11.100 / 6. 11.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.100 / 2. 0.100
libpostproc 54. 0.100 / 54. 0.100
Input #0, rtsp, from 'rtsp://admin:12345@172.22.5.202':
Metadata:
title : Media Presentation
Duration: N/A, start: 0.032000, bitrate: N/A
Stream #0:0: Video: h264 (Baseline), yuv420p, 1280x720, 20 fps, 25 tbr, 90k tbn, 40 tbc
Stream #0:1: Audio: adpcm_g726, 8000 Hz, mono, s16, 16 kb/s
Stream #0:2: Data: none
Input #1, rtsp, from 'rtsp://admin:12345@172.22.5.202':
Metadata:
title : Media Presentation
Duration: N/A, start: 0.032000, bitrate: N/A
Stream #1:0: Video: h264 (Baseline), yuv420p, 1280x720, 20 fps, 25 tbr, 90k tbn, 40 tbc
Stream #1:1: Audio: adpcm_g726, 8000 Hz, mono, s16, 16 kb/s
Stream #1:2: Data: none
Output #0, flv, to 'rtmp://172.22.45.38:1935/live/stream1':
Metadata:
title : Media Presentation
encoder : Lavf57.4.100
Stream #0:0: Audio: mp3 (libmp3lame) ([2][0][0][0] / 0x0002), 22050 Hz, mono, fltp (default)
Metadata:
encoder : Lavc57.6.100 libmp3lame
Stream mapping:
Stream #0:1 (g726) -> amix:input0
Stream #1:1 (g726) -> amix:input1
amix -> Stream #0:0 (libmp3lame)
Press [q] to stop, [?] for help
[rtsp @ 0x2689600] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
[rtsp @ 0x2727c60] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
[rtsp @ 0x2689600] max delay reached. need to consume packet
[NULL @ 0x268c500] RTP: missed 38 packets
[rtsp @ 0x2689600] max delay reached. need to consume packet
[NULL @ 0x268d460] RTP: missed 4 packets
[flv @ 0x2958360] Failed to update header with correct duration.
[flv @ 0x2958360] Failed to update header with correct filesize.
size= 28kB time=00:00:06.18 bitrate= 36.7kbits/s
video:0kB audio:24kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 16.331224%
and amerge:
[root@minjust ~]# ffmpeg -i rtsp://admin:12345@172.22.5.202 -i rtsp://admin:12345@172.22.5.202 -map 0:a -map 1:a -filter_complex amerge -vn -ar 22050 -f flv rtmp://172.22.45.38:1935/live/stream1
ffmpeg version N-76031-g9099079 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-16)
configuration: --enable-gpl --enable-libx264 --enable-libmp3lame --enable-nonfree --enable-version3
libavutil 55. 4.100 / 55. 4.100
libavcodec 57. 6.100 / 57. 6.100
libavformat 57. 4.100 / 57. 4.100
libavdevice 57. 0.100 / 57. 0.100
libavfilter 6. 11.100 / 6. 11.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.100 / 2. 0.100
libpostproc 54. 0.100 / 54. 0.100
Input #0, rtsp, from 'rtsp://admin:12345@172.22.5.202':
Metadata:
title : Media Presentation
Duration: N/A, start: 0.064000, bitrate: N/A
Stream #0:0: Video: h264 (Baseline), yuv420p, 1280x720, 20 fps, 25 tbr, 90k tbn, 40 tbc
Stream #0:1: Audio: adpcm_g726, 8000 Hz, mono, s16, 16 kb/s
Stream #0:2: Data: none
Input #1, rtsp, from 'rtsp://admin:12345@172.22.5.202':
Metadata:
title : Media Presentation
Duration: N/A, start: 0.032000, bitrate: N/A
Stream #1:0: Video: h264 (Baseline), yuv420p, 1280x720, 20 fps, 25 tbr, 90k tbn, 40 tbc
Stream #1:1: Audio: adpcm_g726, 8000 Hz, mono, s16, 16 kb/s
Stream #1:2: Data: none
[Parsed_amerge_0 @ 0x3069cc0] No channel layout for input 1
[Parsed_amerge_0 @ 0x3069cc0] Input channel layouts overlap: output layout will be determined by the number of distinct input channels
Output #0, flv, to 'rtmp://172.22.45.38:1935/live/stream1':
Metadata:
title : Media Presentation
encoder : Lavf57.4.100
Stream #0:0: Audio: mp3 (libmp3lame) ([2][0][0][0] / 0x0002), 22050 Hz, stereo, s16p (default)
Metadata:
encoder : Lavc57.6.100 libmp3lame
Stream mapping:
Stream #0:1 (g726) -> amerge:in0
Stream #1:1 (g726) -> amerge:in1
amerge -> Stream #0:0 (libmp3lame)
Press [q] to stop, [?] for help
[rtsp @ 0x2f71640] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
[rtsp @ 0x300fb40] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
[rtsp @ 0x2f71640] max delay reached. need to consume packet
[NULL @ 0x2f744a0] RTP: missed 18 packets
[flv @ 0x3058b00] Failed to update header with correct duration.
[flv @ 0x3058b00] Failed to update header with correct filesize.
size= 39kB time=00:00:04.54 bitrate= 70.2kbits/s
video:0kB audio:36kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 8.330614%
Thanks.
UPDATE 30 Oct 2015: I found an interesting detail when connecting 2 cameras (they have different microphones, and I can hear the difference between them): the order of the "hello"s from the different cams depends on the ORDER OF INPUTS.
with command
ffmpeg -i rtsp://cam2 -i rtsp://cam1 -map 0:a -map 1:a -filter_complex amix=inputs=2:duration=longest:dropout_transition=0 -vn -ar 22050 -f flv rtmp://172.22.45.38:1935/live/stream1
I hear "hello" from 1st cam and then in 1 second "hello" from 2nd cam.
with command
ffmpeg -i rtsp://cam1 -i rtsp://cam2 -map 0:a -map 1:a -filter_complex amix=inputs=2:duration=longest:dropout_transition=0 -vn -ar 22050 -f flv rtmp://172.22.45.38:1935/live/stream1
I hear "hello" from 2nd cam and then in 1 second "hello" from 1st cam.
So, As I understand - ffmpeg takes inputs not simaltaneousely, but in the order of inputs given.
Question: how to tell ffmpeg to read inputs simaltaneousely?
While amix works perfectly with two local files, with live inputs you can't make the two audios play at once.
When the input is a local file, ffmpeg knows exactly when it starts, so the streams can be mixed into one audio.
But when the input is a live stream, ffmpeg doesn't know exactly when it starts, so the start times will differ between streaming URLs.
More importantly, ffmpeg does not handle its inputs concurrently; it opens them in order, which is why the order of the "hello"s depends on the order of the inputs.
I know of one solution for this: Adobe FMLE (Flash Media Live Encoder), which supports timecode in RTMP streaming. You can take the timecode from both live streams and then mix the two audios into one.
Perhaps you can start with this article: http://www.overdigital.com/2013/03/25/3ways-to-sync-data/
Try (note the filtergraph needs its closing quote, which was missing in the original):
ffmpeg -i rtsp://user:pass@172.22.5.202 -i rtsp://user:pass@172.22.5.202 \
-filter_complex \
"[0:a]asetpts=PTS-STARTPTS[a1];[1:a]asetpts=PTS-STARTPTS[a2]; \
[a1][a2]amix=inputs=2:duration=first:dropout_transition=3[a]" \
-map "[a]" -ar 22050 -vn -f flv rtmp://172.22.45.38:1935/live/stream1
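If the inputs still drift apart after the PTS reset, a possible refinement (a sketch, not part of the original answer) is to let aresample stretch or squeeze each stream to match its timestamps via the async option before mixing:
ffmpeg -i rtsp://user:pass@172.22.5.202 -i rtsp://user:pass@172.22.5.202 \
-filter_complex \
"[0:a]aresample=async=1,asetpts=PTS-STARTPTS[a1]; \
[1:a]aresample=async=1,asetpts=PTS-STARTPTS[a2]; \
[a1][a2]amix=inputs=2:duration=first:dropout_transition=3[a]" \
-map "[a]" -ar 22050 -vn -f flv rtmp://172.22.45.38:1935/live/stream1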

ffmpeg: Image2 => Error while opening encoder

I get the following error from ffmpeg:
Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
ffmpeg -f image2 -i %05d.jpg -vcodec libx264 foo.mp4
I'm pretty sure I've used this exact command before and it's been fine. This is my terminal output. Any help would be appreciated.
$ ffmpeg -f image2 -i %05d.jpg -vcodec libx264 foo.mp4
ffmpeg version 1.0 Copyright (c) 2000-2012 the FFmpeg developers
built on Nov 22 2012 17:59:05 with Apple clang version 4.1 (tags/Apple/clang-421.11.66) (based on LLVM 3.1svn)
configuration: --prefix=/opt/local --enable-swscale --enable-avfilter --enable-libmp3lame --enable-libvorbis --enable-libopus --enable-libtheora --enable-libschroedinger --enable-libopenjpeg --enable-libmodplug --enable-libvpx --enable-libspeex --enable-libfreetype --mandir=/opt/local/share/man --enable-shared --enable-pthreads --cc=/usr/bin/clang --arch=x86_64 --enable-yasm --enable-gpl --enable-postproc --enable-libx264 --enable-libxvid
libavutil 51. 73.101 / 51. 73.101
libavcodec 54. 59.100 / 54. 59.100
libavformat 54. 29.104 / 54. 29.104
libavdevice 54. 2.101 / 54. 2.101
libavfilter 3. 17.100 / 3. 17.100
libswscale 2. 1.101 / 2. 1.101
libswresample 0. 15.100 / 0. 15.100
libpostproc 52. 0.100 / 52. 0.100
Input #0, image2, from '%05d.jpg':
Duration: 00:00:04.44, start: 0.000000, bitrate: N/A
Stream #0:0: Video: mjpeg, yuvj420p, 1201x900 [SAR 1:1 DAR 1201:900], 25 fps, 25 tbr, 25 tbn, 25 tbc
[libx264 @ 0x7fab0881aa00] width not divisible by 2 (1201x900)
Output #0, mp4, to 'foo.mp4':
Stream #0:0: Video: h264, yuvj420p, 1201x900 [SAR 1:1 DAR 1201:900], q=-1--1, 90k tbn, 25 tbc
Stream mapping:
Stream #0:0 -> #0:0 (mjpeg -> libx264)
Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
You have to crop the input image so that the resulting width and height are divisible by 2.
Crop filter:
-vf "crop=in_w-1:in_h"
$ ffmpeg -f image2 -i %05d.jpg -vf "crop=in_w-1:in_h" -vcodec libx264 foo.mp4
UPD:
We can write the formula for the general case, which yields even dimensions on both sides (trunc is needed because expression division is floating-point, so (in_w/2)*2 would just give back in_w):
$ ffmpeg -f image2 -i %05d.jpg -vf "crop=trunc(in_w/2)*2:trunc(in_h/2)*2" -vcodec libx264 foo.mp4
A solution which worked for me was to use
-vf scale=1920:1080
as an option before the output file, i.e.
ffmpeg -y -loop 1 -i "input.png" -c:v libx264 -t 5 -pix_fmt yuv420p -vf scale=1920:1080 out.mp4
It automatically resizes the images, though I never tested what happens when the input resolution is larger than 1920x1080 (TBD).
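For inputs whose size you don't control and don't want to force to a fixed resolution, a common idiom (a sketch, not from the answers above) is to let scale round each dimension down to the nearest even number, which changes the image by at most one pixel per axis:
ffmpeg -f image2 -i %05d.jpg -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" -vcodec libx264 foo.mp4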
