Capture RTSP stream from IP camera with ffmpeg

I used the following command to grab frames from an RTSP H.264 stream, but I am not able to get any frames from the IP camera.
$ ffmpeg -i rtsp://xxxx:yyy#192.168.1.yy:xx/tcp/av0_0 -f image2 -vf fps=fps=1/120 img%03d.jpg
My output:
ffmpeg version 3.1.1 Copyright (c) 2000-2016 the FFmpeg developers
built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.1)
configuration: --enable-gpl --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-nonfree --enable-postproc --enable-version3 --enable-x11grab --disable-yasm
libavutil 55. 28.100 / 55. 28.100
libavcodec 57. 48.101 / 57. 48.101
libavformat 57. 41.100 / 57. 41.100
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 47.100 / 6. 47.100
libswscale 4. 1.100 / 4. 1.100
libswresample 2. 1.100 / 2. 1.100
libpostproc 54. 0.100 / 54. 0.100
[rtsp # 0x2dba3a0] CSeq 6 expected, 0 received.
Last message repeated 5 times
[rtsp # 0x2dba3a0] Could not find codec parameters for stream 0 (Video: h264, none): unspecified size
Consider increasing the value for the 'analyzeduration' and 'probesize' options
Guessed Channel Layout for Input Stream #0.1 : mono
Input #0, rtsp, from 'rtsp://xx:yy#192.168.1.xx:yy/tcp/av0_0':
Metadata:
title : streamed by the RTSP server
Duration: N/A, start: 0.000000, bitrate: 64 kb/s
Stream #0:0: Video: h264, none, 90k tbr, 90k tbn, 180k tbc
Stream #0:1: Audio: pcm_alaw, 8000 Hz, 1 channels, s16, 64 kb/s
Output #0, image2, to 'img%03d.jpg':
Output file #0 does not contain any stream
Exiting normally, received signal 2.

I needed to use -rtsp_transport tcp. The following command works:
ffmpeg -rtsp_transport tcp -i rtsp://bb:cc#192.168.1.xx:yy/tcp/av0_0 -f image2 -vf fps=fps=1 hello/img%03d.png
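If ffmpeg still cannot determine the codec parameters, the warning in the log can also be followed by raising the probe limits before the input. A sketch combining both ideas (the probe values are just examples, not part of the original answer; -analyzeduration is in microseconds, -probesize in bytes):
ffmpeg -rtsp_transport tcp -probesize 5000000 -analyzeduration 10000000 -i rtsp://bb:cc#192.168.1.xx:yy/tcp/av0_0 -f image2 -vf fps=fps=1 hello/img%03d.png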

Related

FFmpeg: Unable to find a suitable output format for 'mpegts'

I am using the following command on several video streams to pipe them into my TVHeadend server.
pipe:///usr/bin/ffmpeg -i *URL* -c copy -metadata service_provider="My Provider" -metadata service_name="My Service"-f mpegts pipe:1
This command works fine for most of the streams, but a few are throwing this error...
ffmpeg version 4.3.1 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 9.3.0 (Alpine 9.3.0)
configuration: --prefix=/usr --enable-avresample --enable-avfilter --enable-gnutls --enable-gpl --enable-libass --enable-libmp3lame --enable-libvorbis --enable-libvpx --enable-libxvid --enable-libx264 --enable-libx265 --enable-libtheora --enable-libv4l2 --enable-libdav1d --enable-postproc --enable-pic --enable-pthreads --enable-shared --enable-libxcb --enable-libssh --disable-stripping --disable-static --disable-librtmp --enable-vaapi --enable-vdpau --enable-libopus --enable-libaom --disable-debug
libavutil 56. 51.100 / 56. 51.100
libavcodec 58. 91.100 / 58. 91.100
libavformat 58. 45.100 / 58. 45.100
libavdevice 58. 10.100 / 58. 10.100
libavfilter 7. 85.100 / 7. 85.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 7.100 / 5. 7.100
libswresample 3. 7.100 / 3. 7.100
libpostproc 55. 7.100 / 55. 7.100
[hls # 0x7f7c03b0c5c0] Skip ('#EXT-X-VERSION:3')
[hls # 0x7f7c03b0c5c0] Opening '****' for reading
[hls # 0x7f7c03b0c5c0] Opening '****' for reading
Input #0, hls, from '****':
Duration: N/A, start: 3116.333433, bitrate: N/A
Program 0
Metadata:
variant_bitrate : 0
Stream #0:0: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
Metadata:
variant_bitrate : 0
Stream #0:1: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp
Metadata:
variant_bitrate : 0
[NULL # 0x55f1a0fcec80] Unable to find a suitable output format for 'mpegts'
mpegts: Invalid argument
The problem streams themselves do work when played directly.
I searched before asking this question and everything I found was related to conversion.
Can anyone see what I am doing wrong?
The reason for using FFmpeg and piping is to eliminate issues with stream freezes. FFmpeg just handles hitches etc. better than simply adding the URL straight into my TVHeadend server.
You are missing a space before -f. The shell glues -f onto the service_name value, so ffmpeg then treats the bare word mpegts as an output file name and cannot guess a format for it. Change from
-metadata service_name="My Service"-f mpegts pipe:1
to
-metadata service_name="My Service" -f mpegts pipe:1
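With that space restored, the full pipe line from the question reads:
pipe:///usr/bin/ffmpeg -i *URL* -c copy -metadata service_provider="My Provider" -metadata service_name="My Service" -f mpegts pipe:1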

Slideshow video created by ffmpeg with scale+pad+vstack filters only contains one image

$ ls -1 *.jpg
1.jpg # could be any size, here is 210×315
2.jpg # could be any size, here is 480x480
3.jpg # could be any size, here is 480x480
$ ls -1 *.png
bg.png # 480x160
$ ffmpeg -y -r 0.5 -pattern_type glob -i '*.jpg' -i bg.png -filter_complex 'scale=iw*min(480/iw\,480/ih):ih*min(480/iw\,480/ih),pad=480:480:(480-iw*min(480/iw\,480/ih))/2:(480-ih*min(480/iw\,480/ih))/2,vstack' -vsync vfr -c:v libx264 -pix_fmt yuv420p out.mp4
...
$ ffprobe out.mp4
ffprobe version 3.3.2 Copyright (c) 2007-2017 the FFmpeg developers
built with Apple LLVM version 8.1.0 (clang-802.0.42)
configuration: --prefix=/usr/local/Cellar/ffmpeg/3.3.2 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libfreetype --enable-libmp3lame --enable-libx264 --enable-libxvid --enable-opencl --disable-lzma --enable-vda
libavutil 55. 58.100 / 55. 58.100
libavcodec 57. 89.100 / 57. 89.100
libavformat 57. 71.100 / 57. 71.100
libavdevice 57. 6.100 / 57. 6.100
libavfilter 6. 82.100 / 6. 82.100
libavresample 3. 5. 0 / 3. 5. 0
libswscale 4. 6.100 / 4. 6.100
libswresample 2. 7.100 / 2. 7.100
libpostproc 54. 5.100 / 54. 5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'out.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.71.100
Duration: 00:00:02.00, start: 0.000000, bitrate: 152 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 480x640 [SAR 1:1 DAR 3:4], 149 kb/s, 0.50 fps, 0.50 tbr, 16384 tbn, 1 tbc (default)
Metadata:
handler_name : VideoHandler
The video size 480x640 is correct, but the video only lasts 2 seconds and only contains the first image (1.jpg). How can I solve this problem?

FFMPEG options differences between two videos

So I'm working with a video built out of PNGs. Making the video hasn't been too hard thanks to ffmpeg; however, most of the videos I've made play great forward but are extremely choppy playing backwards.
Using a program named MPEG Streamclip plus HandBrake, I managed to convert my video to one that plays great both forward and backward. But now I can't figure out which options to pass to ffmpeg to replicate that video.
Using ffprobe I have output for both the good and the bad video. What options am I missing?
Bad Video:
$ ffprobe tea_ffmpeg.mov
ffprobe version 3.0 Copyright (c) 2007-2016 the FFmpeg developers
built with Apple LLVM version 7.0.2 (clang-700.1.81)
configuration: --prefix=/usr/local/Cellar/ffmpeg/3.0 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-opencl --enable-libx264 --enable-libmp3lame --enable-libxvid --enable-vda
libavutil 55. 17.103 / 55. 17.103
libavcodec 57. 24.102 / 57. 24.102
libavformat 57. 25.100 / 57. 25.100
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 31.100 / 6. 31.100
libavresample 3. 0. 0 / 3. 0. 0
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'tea_ffmpeg.mov':
Metadata:
major_brand : qt
minor_version : 512
compatible_brands: qt
encoder : Lavf57.25.100
Duration: 00:00:08.04, start: 0.000000, bitrate: 1140 kb/s
Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 676x450 [SAR 675:676 DAR 3:2], 1138 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
Metadata:
handler_name : DataHandler
encoder : Lavc57.24.102 libx264
Good Video:
$ ffprobe test.mov
ffprobe version 3.0 Copyright (c) 2007-2016 the FFmpeg developers
built with Apple LLVM version 7.0.2 (clang-700.1.81)
configuration: --prefix=/usr/local/Cellar/ffmpeg/3.0 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-opencl --enable-libx264 --enable-libmp3lame --enable-libxvid --enable-vda
libavutil 55. 17.103 / 55. 17.103
libavcodec 57. 24.102 / 57. 24.102
libavformat 57. 25.100 / 57. 25.100
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 31.100 / 6. 31.100
libavresample 3. 0. 0 / 3. 0. 0
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test.mov':
Metadata:
major_brand : qt
minor_version : 537199360
compatible_brands: qt
creation_time : 2016-03-09 15:16:37
Duration: 00:00:08.04, start: 0.000000, bitrate: 2650 kb/s
Stream #0:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, smpte170m/smpte170m/bt709), 674x450, 2646 kb/s, 25 fps, 25 tbr, 25k tbn, 50k tbc (default)
Metadata:
creation_time : 2016-03-09 15:16:47
handler_name : Apple Alias Data Handler
encoder : H.264
FFMPEG Command so far:
ffmpeg -y -i 'pngs/tea-%03d.png' -vf scale=674:-2 -vcodec libx264 -pix_fmt yuv420p -r 25 tea_ffmpeg.mov
I understand mov vs mp4 should just be a container spec, but mov was the first I got working. I'm more than happy to use mp4.
The main thing that stands out is the H.264 profile (High in the bad video, Main in the good one). So:
ffmpeg -y -i 'pngs/tea-%03d.png' -vf scale=674:-2 -vcodec libx264 -profile:v main -pix_fmt yuv420p -r 25 tea_ffmpeg.mov
To be safer, you can use the Baseline profile and a small GOP size (at some cost to file size):
ffmpeg -y -i 'pngs/tea-%03d.png' -vf scale=674:-2 -vcodec libx264 -profile:v baseline -g 12 -pix_fmt yuv420p -r 25 tea_ffmpeg.mov
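Since you're happy to use mp4, the same encode can target an mp4 container directly; this is just a sketch with the same settings (the optional -movflags +faststart moves the index to the front of the file so playback can start sooner):
ffmpeg -y -i 'pngs/tea-%03d.png' -vf scale=674:-2 -vcodec libx264 -profile:v baseline -g 12 -pix_fmt yuv420p -r 25 -movflags +faststart tea_ffmpeg.mp4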

Convert audio file using another audio file as template in ffmpeg

I have some .mp3 audio files with different "configurations": sample rate, bit rate, etc.
For my app, one of them works and the rest do not.
How can I convert the rest of them using the working file's "configuration"?
Metadata of two sample files:
~/Downloads ❯ ffmpeg -i working.mp3 -i not_working.mp3
ffmpeg version 2.8.3 Copyright (c) 2000-2015 the FFmpeg developers
built with Apple LLVM version 7.0.0 (clang-700.1.76)
configuration: --prefix=/usr/local/Cellar/ffmpeg/2.8.3 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-opencl --enable-libx264 --enable-libmp3lame --enable-libvo-aacenc --enable-libxvid --enable-vda
libavutil 54. 31.100 / 54. 31.100
libavcodec 56. 60.100 / 56. 60.100
libavformat 56. 40.101 / 56. 40.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 40.101 / 5. 40.101
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 2.101 / 1. 2.101
libpostproc 53. 3.100 / 53. 3.100
[mp3 # 0x7fd2d380da00] Skipping 0 bytes of junk at 33.
[mp3 # 0x7fd2d380da00] Estimating duration from bitrate, this may be inaccurate
Input #0, mp3, from 'working.mp3':
Metadata:
encoder : Lavf52.64.2
Duration: 00:00:00.65, start: 0.000000, bitrate: 64 kb/s
Stream #0:0: Audio: mp3, 22050 Hz, mono, s16p, 64 kb/s
[mp3 # 0x7fd2d4008800] Skipping 0 bytes of junk at 417.
Input #1, mp3, from 'not_working.mp3':
Duration: 00:00:01.83, start: 0.025057, bitrate: 46 kb/s
Stream #1:0: Audio: mp3, 44100 Hz, stereo, s16p, 46 kb/s
Metadata:
encoder : LAME3.99r
You probably need to change the channel layout and/or sample rate:
ffmpeg -i input.mp3 -c:a libmp3lame -ar 22050 -ac 1 output.mp3
Try 3 commands: one without -ar 22050, one without -ac 1, and one with both as shown above.
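If matching the working file even more closely helps, you can also pin the bitrate to the 64 kb/s shown in its metadata; a sketch (the output name is only an example):
ffmpeg -i not_working.mp3 -c:a libmp3lame -ar 22050 -ac 1 -b:a 64k fixed.mp3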

FFmpeg and PNG watermark on OS X error

I am trying to add a watermark with a transparent background on OS X with ffmpeg.
I am using this command:
ffmpeg -i test.mpg -vf "movie=stuff.png, scale=100:100 [watermark]; [in][watermark] overlay=main_w-overlay_w:main_h-overlay_h-10 [out]" out.mpg
And I am getting this:
ffmpeg version 0.11.1 Copyright (c) 2000-2012 the FFmpeg developers
built on Jun 12 2012 21:37:10 with clang 3.1 (tags/Apple/clang-318.0.54)
configuration: --prefix=/usr/local/Cellar/ffmpeg/0.11.1 --enable-shared --enable-gpl --enable-version3 --enable-nonfree --enable-hardcoded-tables --enable-libfreetype --cc=/usr/bin/clang --enable-libx264 --enable-libfaac --enable-libmp3lame --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libxvid --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libass --enable-libvo-aacenc --disable-ffplay
libavutil 51. 54.100 / 51. 54.100
libavcodec 54. 23.100 / 54. 23.100
libavformat 54. 6.100 / 54. 6.100
libavdevice 54. 0.100 / 54. 0.100
libavfilter 2. 77.100 / 2. 77.100
libswscale 2. 1.100 / 2. 1.100
libswresample 0. 15.100 / 0. 15.100
libpostproc 52. 0.100 / 52. 0.100
Input #0, mpeg, from 'test.mpg':
Duration: 00:00:01.04, start: 1.000000, bitrate: 9106 kb/s
Stream #0:0[0x1e0]: Video: mpeg1video, yuv420p, 640x480 [SAR 1:1 DAR 4:3], 104857 kb/s, 24 fps, 24 tbr, 90k tbn, 24 tbc
Stream #0:1[0x1c0]: Audio: mp2, 44100 Hz, stereo, s16, 128 kb/s
File 'out.mpg' already exists. Overwrite ? [y/N] y
w:640 h:480 pixfmt:yuv420p tb:1/90000 sar:1/1 sws_param:flags=2
[buffersink # 0x7ff50bc1c3e0] No opaque field provided
[png # 0x7ff50c038400] unsupported bit depth 16 and color type 4
[image2 # 0x7ff50c044800] decoding for stream 0 failed
[image2 # 0x7ff50c044800] Could not find codec parameters (Video: png, 640x480)
[movie # 0x7ff50bc17200] Failed to find stream info
[movie # 0x7ff50bc17200] seek_point:0 format_name:(null) file_name:stuff.png stream_index:0
[scale # 0x7ff50bc178c0] auto-inserting filter 'auto-inserted scaler 0' between the filter 'Parsed_movie_0' and the filter 'Parsed_scale_1'
Impossible to convert between the formats supported by the filter 'Parsed_movie_0' and the filter 'auto-inserted scaler 0'
Error opening filters!
I thought I was missing PNG support, so I checked Homebrew for a libpng install, but it turns out PNG support is already included by Apple with OS X.
Also I did
ffmpeg -codecs list | grep -i png
and I do have PNG support in ffmpeg:
DEV D png PNG (Portable Network Graphics) image
Is your file 16-bit grayscale with alpha? It looks like ffmpeg doesn't support this combination. Try converting it to 8-bit and/or RGBA.
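For example, ImageMagick can force an 8-bit RGBA PNG (assuming ImageMagick is installed; stuff.png is the watermark from the question, stuff8.png is just an example name):
convert stuff.png -depth 8 PNG32:stuff8.png
Then point the movie= filter at stuff8.png instead.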
