ffmpeg overlay with scale in filter_complex - ffmpeg

I'm trying to use scale to downscale video input 0, but I get an error:
exec_static /usr/bin/ffmpeg -threads 1 -i "rtmp://url" -stream_loop -1 -i /slider.mp4 -filter_complex "[0:v]scale=1920:1080[1:v] overlay=20:main_h-overlay_h-80" -c:v h264 -c:a aac -b:v 1980k -b:a 64k -an -tune zerolatency -preset ultrafast -f flv rtmp://localhost:1935/live/std 2>>/var/log/nginx/ffmpeg-std.log;
[AVFilterGraph @ 0x556874510700] Unable to parse graph description substring: "overlay=20:main_h-overlay_h-80"
Error initializing complex filters.
I was just trying to scale down the video input, which looks like this:
Input #0, flv, from 'rtmp://url':
Metadata:
displayWidth : 2304
displayHeight : 1296
Duration: 00:00:00.00, start: 53669.778000, bitrate: N/A
Stream #0:0: Data: none
Stream #0:1: Video: h264 (Baseline), yuv420p(progressive), 2304x1296, 12 fps, 12 tbr, 1k tbn
Stream #0:2: Audio: aac (LC), 16000 Hz, mono, fltp
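For what it's worth, the parse error comes from the filtergraph itself: the scale output is labeled [1:v], which is how input 1's video is normally referred to, and nothing links it to the overlay filter that follows (a space instead of a ';' or ','), so the parser stops at "overlay=...". A sketch of a graph that does parse, assuming the RTMP stream is the scaled background and slider.mp4 is the overlay ([bg] and [v] are just placeholder labels, and the audio is dropped as in the original -an command):

ffmpeg -threads 1 -i "rtmp://url" -stream_loop -1 -i /slider.mp4 \
-filter_complex "[0:v]scale=1920:1080[bg];[bg][1:v]overlay=20:main_h-overlay_h-80[v]" \
-map "[v]" -c:v h264 -b:v 1980k -an -tune zerolatency -preset ultrafast \
-f flv rtmp://localhost:1935/live/std

The same filtergraph string can be dropped back into the exec_static line above.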

Related

FFMPEG ScreenRecorder with Audio does not add DrawText

I am using FFmpeg to record the whole screen with gdigrab, as well as the microphone audio and virtual-audio-capturer. It took a good while, but I got it to work and save as an MKV file. I am using VB.NET to pass the string to FFmpeg. Here is the string:
"/k ffmpeg.exe -y -rtbufsize 1500M -f dshow -i audio=#device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{16F2BCE9-4F86-4C29-8B2C-B70508551DC7} -f dshow -i audio=#device_sw_{33D9A762-90C8-11D0-BD43-00A0C911CE86}{8E146464-DB61-4309-AFA1-3578E927E935} -f gdigrab -framerate 50 -i desktop -codec:v h264_nvenc -qp 0 -vf drawtext=fontfile=C:\Windows\ARLRDBD.TTF:Text=" & MyProgName & "fontcolor=white:fontsize=24:box=1:shadowcolor=darkblue:shadowx=1:shadowy=1:boxcolor=blue#0.6:boxborderw=5:x=50:y=H-th-50:-filter_complex [0:a][1:a]amerge=inputs=2[a] -map 2 -map [a] " & str & "\Recordings\ScreenRecorder" & FileTime & ".mkv"
The problem is that it will not draw the text on the screen and I get the following error.
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, dshow, from 'audio=@device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{16F2BCE9-4F86-4C29-8B2C-B70508551DC7}':
Duration: N/A, start: 840168.175000, bitrate: 1411 kb/s
Stream #0:0: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s
Guessed Channel Layout for Input Stream #1.0 : stereo
Input #1, dshow, from 'audio=@device_sw_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\{8E146464-DB61-4309-AFA1-3578E927E935}':
Duration: N/A, start: 840168.949000, bitrate: 1536 kb/s
Stream #1:0: Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s
[gdigrab @ 000001b92bad9880] Capturing whole desktop as 1366x768x32 at (0,0)
[gdigrab @ 000001b92bad9880] Stream #0: not enough frames to estimate rate; consider increasing probesize
Input #2, gdigrab, from 'desktop':
Duration: N/A, start: 1602928621.299032, bitrate: 1678562 kb/s
Stream #2:0: Video: bmp, bgra, 1366x768, 1678562 kb/s, 50 fps, 1000k tbr, 1000k tbn, 1000k tbc
[NULL @ 000001b92badb300] Unable to find a suitable output format for '[0:a][1:a]amerge=inputs=2[a]'
[0:a][1:a]amerge=inputs=2[a]: Invalid argument
Received stop event after 9 passes
If anyone can see where I am going wrong and point me in the right direction it would be really appreciated.
Don't use both -vf and -filter_complex. Just use -filter_complex:
"/k ffmpeg.exe -y -rtbufsize 1500M -f dshow -i audio=#device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{16F2BCE9-4F86-4C29-8B2C-B70508551DC7} -f dshow -i audio=#device_sw_{33D9A762-90C8-11D0-BD43-00A0C911CE86}{8E146464-DB61-4309-AFA1-3578E927E935} -f gdigrab -framerate 50 -i desktop -codec:v h264_nvenc -qp 0 -filter_complex drawtext=fontfile=C:\Windows\ARLRDBD.TTF:text=" & MyProgName & "fontcolor=white:fontsize=24:box=1:shadowcolor=darkblue:shadowx=1:shadowy=1:boxcolor=blue#0.6:boxborderw=5:x=50:y=H-th-50[v];[0:a][1:a]amerge=inputs=2[a] -map [v] -map [a] " & str & "\Recordings\ScreenRecorder" & FileTime & ".mkv"

ffmpeg filter complex error ( burning subtitles used overlay filter)

I am trying to burn image-based DVB subtitles onto a video using the ffmpeg overlay filter, but I'm failing because I'm using filter_complex incorrectly.
This is my command line:
./ffmpeg -y -hwaccel cuda -hwaccel_output_format cuda -hwaccel_device 0 \
-i input.ts \
-filter_complex "[v:0][s:3]overlay[overlay];[overlay]hwupload_cuda[base];[base]scale_npp=1920:1080[v1];[base]scale_npp=1920:1080[v2];[base]scale_npp=1280:720[v3];[base]scale_npp=720:480[v4];[base]scale_npp=480:360[v5]" \
-map "[v1]" -map 0:a -c:v hevc_nvenc -b:v 6000000 -maxrate 7000000 -bufsize 12000000 -g 15 -c:a libfdk_aac -ar 48000 -ac 2 -pkt_size 128000 -f mpegts test_1.ts \
-map "[v2]" -map 0:a -c:v h264_nvenc -an -b:v 4000000 -maxrate 5000000 -bufsize 8000000 -g 15 -f mpegts test_2.ts \
-map "[v3]" -map 0:a -c:v h264_nvenc -an -b:v 2500000 -maxrate 3500000 -bufsize 5000000 -g 15 -f mpegts test_3.ts \
-map "[v4]" -map 0:a -c:v h264_nvenc -an -b:v 1500000 -maxrate 2500000 -bufsize 3000000 -g 15 -f mpegts test_4.ts \
-map "[v5]" -map 0:a -c:v h264_nvenc -an -b:v 800000 -maxrate 1800000 -bufsize 2000000 -g 15 -f mpegts test_5.ts
But it fails. These are the error messages:
Input #0, mpegts, from 'input.ts':
Duration: N/A, start: 22881.964411, bitrate: N/A
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0[0x100](eng): Audio: ac3 ([129][0][0][0] / 0x0081), 48000 Hz, 5.1(side), fltp, 384 kb/s
Stream #0:1[0x101](ind): Audio: ac3 ([129][0][0][0] / 0x0081), 48000 Hz, stereo, fltp, 192 kb/s
Stream #0:2[0x102](zho): Audio: ac3 ([129][0][0][0] / 0x0081), 48000 Hz, stereo, fltp, 192 kb/s
Stream #0:3[0x103](kho): Audio: ac3 ([129][0][0][0] / 0x0081), 48000 Hz, stereo, fltp, 192 kb/s
Stream #0:4[0x104]: Video: h264 (High), 1 reference frame ([27][0][0][0] / 0x001B), yuv420p(top first, left), 1920x1080 (1920x1088) [SAR 1:1 DAR 16:9], 25 fps, 50 tbr, 90k tbn, 50 tbc
Stream #0:5[0x105](CHI): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
Stream #0:6[0x106](CHS): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
Stream #0:7[0x107](IND): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
Stream #0:8[0x108](THA): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
Stream #0:9[0x109](MAN): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
Stream #0:10[0x10a](MON): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
Stream #0:11[0x10b](BUR): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
Stream #0:12[0x10c](ENG): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
[mpegts @ 0x47cbd00] Invalid stream specifier: base.
Last message repeated 17 times
Stream specifier 'base' in filtergraph description [v:0][s:3]overlay[overlay];[overlay]hwupload_cuda[base];[base]scale_npp=1920:1080[v1];[base]scale_npp=1920:1080[v2];[base]scale_npp=1280:720[v3];[base]scale_npp=720:480[v4];[base]scale_npp=480:360[v5] matches no streams.
My plan is to overlay the subtitles once and then produce several scaled renditions from that result.
How can I burn the subtitles onto the video with filter_complex in ffmpeg to get this structure?
You cannot reuse outputs from a filterchain. Use the split / asplit filters to make copies of a filter output so it can be used multiple times.
Simplified example:
ffmpeg -i input.ts -i logo.png \
-filter_complex "[0][1]overlay,split=outputs=3[s1][s2][s3];[s1]scale=-2:1080[1080];[s2]scale=-2:720[720];[s3]scale=-2:360[360]" \
-map "[1080]" -map 0:a 1080.ts \
-map "[720]" -map 0:a 720.ts \
-map "[360]" -map 0:a 360.ts

ffmpeg map mkv with multiple audio to mkv with more audio

I have an MKV like this:
Input #0, matroska,webm, from 'MyVideo.mkv':
Stream #0:0(eng): Video: h264 (High), yuv420p(progressive), 1920x1080, 23.98 fps
Stream #0:1(fre): Audio: dts (DTS), 48000 Hz, 5.1(side), fltp, 768 kb/s (default)
Stream #0:2(eng): Audio: dts (DTS), 48000 Hz, 5.1(side), fltp, 1536 kb/s
I would like an output MKV like this:
Output:
Stream #0:0(eng): Video: h264 (High), yuv420p(progressive), 1920x1080, 23.98 fps
Stream #0:1(fre): Audio: E-AC-3, 5.1, 384 kb/s
Stream #0:2(fre): Audio: E-AC-3, stereo, 384 kb/s
Stream #0:3(eng): Audio: E-AC-3, 5.1, 384 kb/s
Stream #0:4(eng): Audio: E-AC-3, stereo, 384 kb/s
Following this page: https://trac.ffmpeg.org/wiki/Map (Example 1), I'm doing this:
ffmpeg -i MyVideo.mkv \
-map 0:0 -map 0:1 -map 0:1 -map 0:2 -map 0:2 \
-c:v copy \
-c:a:0 eac3 -ab 384k -ac 6 \
-c:a:1 eac3 -ab 384k -ac 2 \
-c:a:2 eac3 -ab 384k -ac 6 \
-c:a:3 eac3 -ab 384k -ac 2 \
output.mkv
But I'm getting this output:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'output.mkv':
Stream #0:0(eng): Video: h264 (High) (High), yuv420p(progressive), 1920x1080, 23.98 fps
Stream #0:1(fre): Audio: eac3 (ec-3), 48000 Hz, stereo, fltp, 384 kb/s
Stream #0:2(fre): Audio: eac3 (ec-3), 48000 Hz, stereo, fltp, 384 kb/s
Stream #0:3(eng): Audio: eac3 (ec-3), 48000 Hz, stereo, fltp, 384 kb/s
Stream #0:4(eng): Audio: aac (LC) (mp4a), 48000 Hz, stereo, fltp, 378 kb/s
Do you have an idea what my mistake is? I think it's somewhere in my -map and -c:a:x parameters.
Found my error: I was missing the :x index on -ab and -ac to specify the bitrate and channel count for each output audio stream.
from:
-c:a:0 eac3 -ab 384k -ac 6
-c:a:1 eac3 -ab 384k -ac 2
-c:a:2 eac3 -ab 384k -ac 6
-c:a:3 eac3 -ab 384k -ac 2
to:
-c:a:0 eac3 -ab:0 384k -ac:0 6
-c:a:1 eac3 -ab:1 384k -ac:1 2
-c:a:2 eac3 -ab:2 384k -ac:2 6
-c:a:3 eac3 -ab:3 384k -ac:3 2
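Put together, the corrected command from this self-answer looks roughly like this (same mapping as before, only the bitrate and channel options now carry a per-stream index):

ffmpeg -i MyVideo.mkv \
-map 0:0 -map 0:1 -map 0:1 -map 0:2 -map 0:2 \
-c:v copy \
-c:a:0 eac3 -ab:0 384k -ac:0 6 \
-c:a:1 eac3 -ab:1 384k -ac:1 2 \
-c:a:2 eac3 -ab:2 384k -ac:2 6 \
-c:a:3 eac3 -ab:3 384k -ac:3 2 \
output.mkv

A more explicit spelling of the same per-stream options is -b:a:0 384k -ac:a:0 6, and so on for each output audio stream.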

Chaining drawtext to overlay in FFMPEG

I am trying to add some text to a video using drawtext. Is it also possible to set fontcolor to white?
Here is the command I am trying:
ffmpeg -i test.mp4 -i Watermark.png -filter_complex "[0:v][1:v] overlay=0:0:enable='between(t,5,30)'[v]; [0]volume=0:enable='between(t,5,30)'[a];drawtext=fontsize=50:fontfile=FreeSerif.ttf:text='This screen is redacted':x=(w-text_w)/2:y=(h-text_h)/2" -map "[v]" -map "[a]" -preset ultrafast output.mp4
I get: Cannot find a matching stream for unlabeled input pad 0 on filter Parsed_drawtext_2
With the answer below, it fails for this video:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'austin.mp4':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: isommp42
creation_time : 2018-07-14T04:40:00.000000Z
location : -34.8857+138.5805/
location-eng : -34.8857+138.5805/
com.android.version: 8.1.0
com.android.capture.fps: 30.000000
Duration: 00:01:12.00, start: 0.000000, bitrate: 48200 kb/s
Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuvj420p(pc, smpte170m), 3840x2160, 48060 kb/s, SAR 1:1 DAR 16:9, 30 fps, 30 tbr, 90k tbn, 180k tbc (default)
Metadata:
creation_time : 2018-07-14T04:40:00.000000Z
handler_name : VideoHandle
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, mono, fltp, 96 kb/s (default)
Metadata:
creation_time : 2018-07-14T04:40:00.000000Z
handler_name : SoundHandle
I keep getting this error:
x264 [error]: malloc of size 22688384 failed
time=00:00:01.00 bitrate= 0.4kbits/s speed= 1x
Video encoding failed
[aac @ 000002727e589f00] Qavg: 4246.095
[aac @ 000002727e589f00] 2 frames left in the queue on closing
Conversion failed!
Use
ffmpeg -i test.mp4 -i Watermark.png -filter_complex "[0:v][1:v] overlay=0:0:enable='between(t,5,30)', drawtext=fontfile=FreeSerif.ttf:fontcolor=white:fontsize=50:text='This screen is redacted':x=(w-text_w)/2:y=(h-text_h)/2[v];[0]volume=0:enable='between(t,5,30)'[a]" -map "[v]" -map "[a]" -preset ultrafast output.mp4
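The malloc failure on the 4K clip is a separate, memory-related problem that the answer above does not address. If the redacted output does not have to stay at 3840x2160, one possible workaround (not part of the original answer) is to downscale before overlaying and drawing:

ffmpeg -i austin.mp4 -i Watermark.png -filter_complex "[0:v]scale=1920:-2[base];[base][1:v] overlay=0:0:enable='between(t,5,30)', drawtext=fontfile=FreeSerif.ttf:fontcolor=white:fontsize=50:text='This screen is redacted':x=(w-text_w)/2:y=(h-text_h)/2[v];[0]volume=0:enable='between(t,5,30)'[a]" -map "[v]" -map "[a]" -preset ultrafast output.mp4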

Can't record S-Video with avconv

I've been trying to record the feed from my S-Video cable using avconv. I am able to record composite video with avconv, but the quality isn't the best. To select the input I use v4l2-ctl -i $n, where $n is either 0 for composite or 1 for S-Video. I tried v4l2-ctl -i 1 to select S-Video, but that doesn't work. Oddly enough, when I use tvtime or qv4l2 I can view the video.
I am able to record audio, just not the video. In tvtime I get audio as well as video. I was also able to record the S-Video with ffmpeg using the -channel option; ffmpeg, however, can't record the audio, and recording the audio separately isn't an option.
Edit: as per Anton's request, here's the command I use to capture video with avconv.
avconv -f video4linux2 -i /dev/video0 -f alsa -i hw:2,0 -vcodec mpeg4 -vtag xvid -b 8000k -r 30000/1001 -acodec \
libmp3lame -ar 48000 -ac 2 -ab 192k -aspect 16:9 -vf yadif=0,scale=1200:800 -y test.avi
And here's the output from this command:
avconv version 0.8.6-6:0.8.6-0ubuntu0.12.10.1, Copyright (c) 2000-2013 the Libav developers
built on Apr 2 2013 17:02:16 with gcc 4.7.2
[video4linux2 @ 0x982340] Estimating duration from bitrate, this may be inaccurate
Input #0, video4linux2, from '/dev/video0':
Duration: N/A, start: 1368113780.210591, bitrate: 165722 kb/s
Stream #0.0: Video: rawvideo, yuyv422, 720x480, 165722 kb/s, 29.97 tbr, 1000k tbn, 29.97 tbc
[alsa @ 0x982ba0] Estimating duration from bitrate, this may be inaccurate
Input #1, alsa, from 'hw:2,0':
Duration: N/A, start: 854.715783, bitrate: N/A
Stream #1.0: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s
Incompatible pixel format 'yuyv422' for codec 'mpeg4', auto-selecting format 'yuv420p'
[buffer @ 0x9930a0] w:720 h:480 pixfmt:yuyv422
[yadif @ 0x997960] mode:0 parity:-1 auto_enable:0
[yadif @ 0x997960] auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'Parsed filter 0 yadif'
[scale @ 0x985a80] w:720 h:480 fmt:yuyv422 -> w:720 h:480 fmt:yuv420p flags:0x4
[scale @ 0x998000] w:720 h:480 fmt:yuv420p -> w:1200 h:800 fmt:yuv420p flags:0x4
Output #0, avi, to 'test.avi':
Metadata:
ISFT : Lavf53.21.1
Stream #0.0: Video: mpeg4, yuv420p, 1200x800 [PAR 32:27 DAR 16:9], q=2-31, 8000 kb/s, 29.97 tbn, 29.97 tbc
Stream #0.1: Audio: libmp3lame, 48000 Hz, 2 channels, s16, 192 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (rawvideo -> mpeg4)
Stream #1:0 -> #0:1 (pcm_s16le -> libmp3lame)
Try this:
avconv -f video4linux2 -i /dev/video0 -f alsa -ac 1 -i hw:2,0 \
-vcodec mpeg4 -vtag xvid -b 8000k -r 30000/1001 -acodec \
libmp3lame -ar 48000 -ac 2 -ab 192k -aspect 16:9 \
-vf yadif=0,scale=1200:800 -y test.avi
Note the -ac 1 before the ALSA input; you have to set the number of audio channels for the capture. Also note the \ used for line breaks.
The -channel option doesn't work with avconv? It should work the same as it does in ffmpeg. Also, provide your full command line and the avconv output.
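For reference, ffmpeg's video4linux2 demuxer selects the card input with its channel option, which is what the questioner used; assuming avconv accepts it the same way (as the comment above suggests), the capture command would become something like:

avconv -f video4linux2 -channel 1 -i /dev/video0 -f alsa -ac 1 -i hw:2,0 \
-vcodec mpeg4 -vtag xvid -b 8000k -r 30000/1001 -acodec libmp3lame \
-ar 48000 -ac 2 -ab 192k -aspect 16:9 -vf yadif=0,scale=1200:800 -y test.avi

Here -channel 1 selects the S-Video input, matching v4l2-ctl -i 1.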
