I'm trying to do some analysis of image-based subtitles by outputting them as a sequence of PNGs to a pipe. My command line looks like this:
ffmpeg -y -i "$INPUTFILE" -f lavfi -i color=c=black:s=1920x1080 -filter_complex "[1:v][0:s:5]overlay[v]" -shortest -map "[v]" -c:v png -f image2pipe - | pike subspng.pike
In theory, -shortest should mean that the output stops at the shortest input, which here is the input file, roughly seven minutes long. Instead, my script receives an infinite sequence of black frames after the last subtitle frame, until I send FFmpeg a SIGINT. Placing -shortest before -filter_complex has the same effect.
Is there a different way to force the filtering to stop at the end of the input file?
EDIT: Using the shortest=1 flag on the overlay filter also doesn't help, even in combination with -shortest.
Use the shortest option in the overlay filter:
shortest
If set to 1, force the output to terminate when the shortest input terminates. Default value is 0.
ffmpeg -y -i $INPUTFILE -f lavfi -i color=c=black:s=1920x1080 -filter_complex "[1:v][0:s:5]overlay=shortest=1[v]" -map "[v]" -c:v png -f image2pipe - | pike subspng.pike
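If you are not sure whether your FFmpeg build's overlay filter supports this option, you can list the options it accepts:
ffmpeg -h filter=overlay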
Building on the suggestions from both llogan and Gyan, here's what I came up with:
ffmpeg -y -i $INPUTFILE -filter_complex "[0:v]drawbox=c=black:t=fill[black]; [black][0:s:5]overlay=shortest=1[v]" -map "[v]" -c:v png -f image2pipe - | pike subspng.pike
Instead of creating an infinite-length source of black frames, this takes the original video stream and covers the entire frame with a black box. NOTE: may require a fairly recent version of FFmpeg for drawbox's t=fill option.
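If your build turns out to be too old for drawbox's t=fill, a hedged fallback is to keep the synthetic black source but cap it with an input-level -t (the 420 seconds here is a placeholder for the actual input length, which you would have to know or probe first):
ffmpeg -y -i "$INPUTFILE" -t 420 -f lavfi -i color=c=black:s=1920x1080 -filter_complex "[1:v][0:s:5]overlay=shortest=1[v]" -map "[v]" -c:v png -f image2pipe - | pike subspng.pike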
I would like to cut an mp4 (multi-cut if possible, otherwise a single cut is fine) and generate the cut plus a preview file in 360p.
My goal is to achieve something that looks like this:
`ffmpeg -y -progress /dev/stdout -i media.mp4
-vf "select='+between(t,0,25)',setpts=N/FRAME_RATE/TB"
-af "aselect='+between(t,0,25)’,asetpts=N/SR/TB"
-filter_complex split=2[mvideo][pvideo]
-map [mvideo] media_cut.mp4
-map [pvideo] -vf scale=-1:360 media_preview.mp4`
Here, a first -vf select filter multi-cuts the media, a split filter generates both the cut media and a resized cut, and a second -vf with scale keeps the aspect ratio at a height of 360.
I can't mix -vf/-af with -filter_complex, which is why I have no idea how to do this.
Thanks a lot for your tips.
You can do it one of two ways: 1) declare a simple filtergraph for each output, or 2) do all filtering inside a complex filtergraph.
#1 Per-output simple filtergraph.
ffmpeg -y -progress /dev/stdout -i media.mp4
-vf "select='between(t,0,25)',setpts=N/FRAME_RATE/TB"
-af "aselect='between(t,0,25)’,asetpts=N/SR/TB"
media_cut.mp4
-vf "select='between(t,0,25)',setpts=N/FRAME_RATE/TB,scale=-2:360"
-af "aselect='between(t,0,25)’,asetpts=N/SR/TB"
media_preview.mp4
#2 A complex filtergraph.
ffmpeg -y -progress /dev/stdout -i media.mp4
-filter_complex
"[0:v]select='between(t,0,25)',setpts=N/FRAME_RATE/TB,split=2[mvideo][pvideo];
[pvideo]scale=-2:360[pvideo];
[0:a]aselect='between(t,0,25)',asetpts=N/SR/TB,asplit=2[maudio][paudio]"
-map [mvideo] -map [maudio]
media_cut.mp4
-map [pvideo] -map [paudio]
media_preview.mp4
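Either way, a hedged sanity check that the preview really came out scaled (filenames from above):
ffprobe -v error -select_streams v:0 -show_entries stream=width,height media_preview.mp4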
I am trying to concatenate multiple audio files and a single image into one video file using one command.
I have a list of MP3 files and a playlist file (.m3u) in a directory.
I managed to do this, but my solution is bad:
1. reading the playlist file and creating a new .txt in the format ffmpeg requires
2. concatenating the audio files into one .mp3 using the .txt
3. concatenating the large audio file and the static image into a video
This creates 2 unnecessary files that I have to delete.
I tried a different command:
ffmpeg -loop 1 -framerate 1 -i myImage.jpg -i file1.mp3 -i file2.mp3 -i file3.mp3 -filter_complex '[0:0][1:0][2:0]concat=n=3:v=0:a=1' -tune stillimage -shortest output.mp4
however I'm getting an "Error initializing complex filters. Invalid argument" error.
Another kick in the nuts is that the system I'm working on has spaces in the folder names.
I tried using -i "concat:file1.mp3|file2.mp3|..." but I cannot use double quote marks to quote out the path, so I get an invalid argument error.
Thank you very much for your help.
Method 1: concat demuxer
Make input.txt containing the following:
file 'file1.mp3'
file 'file2.mp3'
file 'file3.mp3'
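Since your folder names contain spaces: paths inside the list file can be single-quoted exactly as shown above, so an entry like this (hypothetical path) is fine:
file '/media/My Audio Folder/file 1.mp3'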
Run ffmpeg:
ffmpeg -loop 1 -framerate 1 -i myImage.jpg -f concat -i input.txt -filter_complex "[0]scale='iw-mod(iw,2)':'ih-mod(ih,2)',format=yuv420p[v]" -map "[v]" -r 15 -tune stillimage -map 1:a -shortest -movflags +faststart output.mp4
All MP3 files being input to the concat demuxer must have the same channel layout and sample rate. If they do not, convert them first using the -ac and -ar options so they are all the same.
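A hedged sketch of that pre-conversion (filenames from above; stereo at 44100 Hz is an arbitrary choice):
for f in file1.mp3 file2.mp3 file3.mp3; do ffmpeg -y -i "$f" -ac 2 -ar 44100 "same_$f"; done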
Method 2: concat filter
Update: There seems to be a bug with -shortest not working with the concat filter (I keep forgetting about that). See the method above using the concat demuxer, or replace -shortest with -t. The value for -t should equal the total duration of all three MP3 files.
ffmpeg -loop 1 -framerate 1 -i myImage.jpg -i file1.mp3 -i file2.mp3 -i file3.mp3 -filter_complex "[0]scale='iw-mod(iw,2)':'ih-mod(ih,2)',format=yuv420p[v];[1:a][2:a][3:a]concat=n=3:v=0:a=1[a]" -map "[v]" -r 15 -map "[a]" -tune stillimage -shortest -movflags +faststart output.mp4
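If you go the -t route mentioned above, a hedged sketch for summing the three durations with ffprobe and awk:
total=$(for f in file1.mp3 file2.mp3 file3.mp3; do ffprobe -v error -show_entries format=duration -of csv=p=0 "$f"; done | awk '{s+=$1} END {print s}')
Then run the command above with -t "$total" in place of -shortest.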
Option descriptions
scale filter makes the image have even width and height, which is required when outputting YUV 4:2:0 with libx264.
format filter sets chroma subsampling to 4:2:0; without it libx264 will try to keep the input's subsampling, but most players can only handle 4:2:0.
concat filter accepts file1.mp3, file2.mp3, and file3.mp3 as inputs. Your original command was trying to concat the video to the audio, resulting in Invalid argument.
-map "[v]" chooses the video output from -filter_complex.
-r 15 sets the output frame rate to 15 because most players can't handle 1 fps. This is faster than setting the input -framerate to 15.
-map "[a]" chooses the audio output from -filter_complex.
-map 1:a chooses the audio from input #1 (the second input, as counting starts from 0).
-movflags +faststart moves some data (the moov atom) from the end of the MP4 output file to the beginning once encoding finishes. This allows playback to begin sooner; otherwise the complete file has to be downloaded first.
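If you want to confirm the relocation worked, a hedged check is to read the atom order from the demuxer's trace log; if type:'moov' is printed before type:'mdat', the moov atom is at the front:
ffprobe -v trace output.mp4 2>&1 | grep -o -e "type:'moov'" -e "type:'mdat'"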
In ffmpeg, how do I keep the text in the same location while filtering, e.g. zooming?
FFmpeg for Linux
ffmpeg
-t 5
-i x.jpg -filter_complex "[0:v]drawtext=fontfile='ariblk.ttf':text='test text':fontsize=24:x=0.23333333333333*main_w:y=0.1325*main_h:fontcolor=#000000:alpha=1,zoompan=z='if(lte(zoom,1.0),1.5,max(1.001,zoom-0.0015))':d=125,fade=t=out:st=4:d=1[v0]; [v0]concat=n=1:v=1:a=0,format=yuv420p[v]"
-map "[v]"
-s "800x450"
-t 40 ./video.mp4
The text is zoomed in as well but I want it to keep the same size.
Perform the zoom before drawing text.
ffmpeg
-i x.jpg -filter_complex "[0:v]zoompan=z='if(lte(zoom,1.0),1.5,max(1.001,zoom-0.0015))':d=125:s=800x450,drawtext=fontfile='ariblk.ttf':text='test text':fontsize=24:x=0.23333333333333*main_w:y=0.1325*main_h:fontcolor=#000000:alpha=1,fade=t=out:st=4:d=1,format=yuv420p[v]"
-map "[v]"
-t 40 ./video.mp4
The concat is unnecessary; you only have one input. Note that the output size is now set with zoompan's s=800x450 option, so the separate -s output option can be dropped.
I am using ffmpeg on Ubuntu 14.04 (Jon Severinsson's PPA) and am playing video files out of a folder - one by one.
First question I wasn't able to figure out yet: how can I add a simple overlay, 720p footage with a 720p overlay (with partial transparency)? There is no resize or alignment needed, just a 1:1 overlay. I tried a lot already with -vf and -filter_complex, but the overlay didn't show up.
Second question: with concatenation, is it possible to make the switches between the files seamless, ideally without creating a new file, i.e. on the fly? I need to reduce the gaps between the file switches or eliminate them completely.
This is my bash right now:
#!/usr/bin/env bash
while :; do
files=(*)
ffmpeg -re -i "${files[$RANDOM % ${#files[#]}]}" -acodec copy -vcodec copy -f flv ServerAddress
done
So I have everything in /vod: the video files as well as the overlay.png.
Thanks a bunch in advance,
Tim
For the overlay you need to scale the image to the original source dimensions.
To concat multiple source files that have the same codec use the concat demuxer.
Eg:
Make a playlist.txt with the following format:
file '/path/to/file_1'
file '/path/to/file_2'
file '/path/to/file_3'
[..]
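A hedged sketch for generating that playlist from your /vod directory (assuming the videos are .mp4 files and overlay.png is the only non-video file there):
cd /vod
for f in *.mp4; do printf "file '%s'\n" "$PWD/$f"; done > playlist.txt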
And then:
ffmpeg -f concat -i playlist.txt -i overlay.png -filter_complex "[1:v] scale=1280:720 [ovr];[0:v][ovr] overlay=0:0" ...
If the video and the image are the same size you can just use:
ffmpeg -f concat -i playlist.txt -i overlay.png -filter_complex "[0:v] overlay"
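To double-check that the PNG actually carries an alpha channel (needed for the partial transparency), a hedged probe; a pix_fmt such as rgba means transparency is present:
ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt overlay.png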
Update:
Full example:
You cannot filter and copy the video stream at the same time!
ffmpeg -re -f concat -i playlist.txt -i overlay.png -filter_complex "[0:v] overlay" -c:v h264 -c:a libfdk_aac -ar 44100 -f flv rtmp://...
If your audio stream is valid and has one of the supported audio rates (44100, 22050, 11025) you can do:
ffmpeg -re -f concat -i playlist.txt -i overlay.png -filter_complex "[0:v] overlay" -c:v h264 -c:a copy -f flv rtmp://...
I am trying to create a video output from multiple video cameras.
Following the example given here Presenting more than 2 videos using FFmpeg
and other similar examples.
but I'm getting the error
Output pad "default" for the filter "src" of type "buffer" not connected to any destination
when I run:
ffmpeg -i /dev/video1 -i /dev/video0 -filter_complex "[0:0]pad=iw*2:ih[a];[a][1:0]overlay=w[b];[b][2:0]overlay=w:h" -shortest output.mp4
I'm not really sure what this means or how to fix it.
Any help would be greatly appreciated!
Thanks.
When using the pad filter, you have to specify the size of the output image and where you want to put the input image:
[0:0]pad=iw*2:ih:0:0
Tested under Windows 7 with files of the same size:
ffmpeg -i out.avi -i out.avi -filter_complex "[0:0]pad=iw*2:ih:0:0[a];[a][1:0]overlay=w" -shortest output.mp4
And with a webcam capture (vfwcap) and a still picture (as I have only one webcam). BTW, you can see how to scale one of the sources to fit the target (just in case your sources have different resolutions):
ffmpeg -y -f vfwcap -r 10 -i 0 -loop 1 -i photo.jpg -filter_complex "[0:0]pad=iw*2:ih:0:0[a];[1:0]scale=640:480[b];[a][b]overlay=w" -shortest output.mp4
Under Linux:
ffmpeg -i /dev/video1 -i /dev/video0 -filter_complex "[0:0]pad=iw*2:ih:0:0[a];[a][1:0]overlay=w" -shortest output.mp4
If it doesn't work, test a simple recording of video1 and then of video0, and check their properties (type, resolution, fps):
ffmpeg -i /dev/video1 -shortest output1.mp4
ffmpeg -i output1.mp4
If you still have issues, update your question with the ffmpeg console output (as text) for the video1 and video0 captures, and also of the call with the overlay.
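A hedged shortcut for comparing those properties directly on the test recording:
ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,width,height,avg_frame_rate output1.mp4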