add current video time with ffmpeg

I can add the frame number to a video with ffmpeg as below:
ffmpeg -y -i input.mp4 -an -vf drawtext=fontsize=36:fontcolor=yellow:text='%{frame_num}':x=20:y=20 -f mp4 output.mp4
How can I modify it to show HH:MM:SS instead of the current frame number?

As @kesh pointed out, pts:hms instead of frame_num does the job, but your specific command line will lead you astray, as you have omitted a set of encapsulating quotes.
While omitting them works with frame_num, it will not work with pts:hms.
Use:
ffmpeg -y -i input.mp4 -an -vf "drawtext=fontsize=36:fontcolor=yellow:text='%{pts\:hms}':x=20:y=20" -f mp4 output.mp4
Here the entire drawtext filter is wrapped in quotation marks.
Edit:
To achieve a pure HH:MM:SS format, use gmtime rather than hms, i.e.
ffmpeg -y -i input.mp4 -an -vf "drawtext=fontsize=36:fontcolor=yellow:text=' %{pts\:gmtime\:0\:%T}':x=20:y=20" -f mp4 output.mp4
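If the burnt-in text is hard to read against the footage, drawtext can also draw a background box behind it. A minimal sketch (box, boxcolor, and boxborderw are standard drawtext options; the styling values here are just an illustration):
ffmpeg -y -i input.mp4 -an -vf "drawtext=fontsize=36:fontcolor=yellow:box=1:boxcolor=black@0.5:boxborderw=8:text='%{pts\:gmtime\:0\:%T}':x=20:y=20" -f mp4 output.mp4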

Related

How do I generate a color screen for the duration of an MP3 in ffmpeg?

I have successfully generated a blue screen to add to an mp3, but I have always needed to include the length of the clip to match the mp3. When I don't include a duration, it continues to generate footage until I cancel the command.
ffmpeg -f lavfi -i color=blue:s=1920x1080 -i input.mp3 -t 00:02:08 output.mp4
How do I specify that I only want color generated during the length of the mp3 that I am adding?
ffmpeg -i <.jpg> -i <.mp3>
This worked too, but I don't want to rely on a JPEG file.
Use -shortest:
ffmpeg -f lavfi -i color=blue:s=1920x1080 -i input.mp3 -shortest output.mp4
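If you ever need the explicit duration anyway (for example, to keep using -t), you could read it from the mp3 with ffprobe first. A sketch, assuming a Unix shell:
duration=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 input.mp3)
ffmpeg -f lavfi -i color=blue:s=1920x1080 -i input.mp3 -t "$duration" output.mp4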

using ffmpeg to replace a single frame based on timestamp

Is it possible to use the ffmpeg CLI to replace a specific frame at a specified timestamp with another image? I know how to extract all frames from a video and re-stitch them as another video, but I am looking to avoid this process, if possible.
My goal:
Given a video file input.mp4
Given a PNG file, image.png, which is known to occur at exactly a specific timestamp within input.mp4
Create out.mp4 with image.png replacing that frame of input.mp4
The basic command is
ffmpeg -i video -i image \
-filter_complex \
"[1]setpts=4.40/TB[im];[0][im]overlay=eof_action=pass" -c:a copy out.mp4
where 4.40 is the timestamp of the frame to be replaced.
Note that images default to a framerate of 25 fps. Thus, for timestamps that are not multiples of (1/25 =) 0.04 s, the framerate must be specified (e.g., to replace the frame at timestamp 3.5035 in a 29.97 fps video):
ffmpeg -i input.vid -itsoffset 3.5035 -framerate 30000/1001 -i frame.png -filter_complex "[0:v:0][1]overlay=eof_action=pass" output.vid
This technique works just as well for replacing multiple sequential frames (e.g., to replace frames starting at 107 s in a 12.5 fps video):
ffmpeg -i input.mp4 -itsoffset 107 -framerate 25/2 -i '107+%06d.png' -filter_complex "[0:v:0][1]overlay=eof_action=pass" output.mp4
This only works for videos with constant framerates (CFR). For VFR video, I have a separate question.
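To find the exact timestamp of the frame you want to replace in the first place, ffprobe can dump per-frame timestamps. A sketch (prints one pts_time value per video frame):
ffprobe -v error -select_streams v:0 -show_entries frame=pts_time -of csv=p=0 input.mp4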

FFmpeg simple 1:1 overlay and concatenate?

I am using ffmpeg on Ubuntu 14.04 (Jon Severinsson's PPA) and am playing video files out of a folder, one by one.
First question I wasn't able to figure out yet: how can I add a simple overlay of 720p footage with a 720p overlay (with partial transparency)? There is no resize or alignment needed, just a 1:1 overlay. I have already tried a lot with -vf and -filter_complex, but nothing worked.
Second question: with concatenation, is it possible to make the switches between the files seamless, ideally without creating a new file, i.e., on the fly? I need to reduce the gaps between the file switches or eliminate them completely.
This is my bash right now:
#!/usr/bin/env bash
while :; do
  files=(*)
  ffmpeg -re -i "${files[$RANDOM % ${#files[@]}]}" -acodec copy -vcodec copy -f flv ServerAddress
done
So I have everything in /vod: the video files, as well as the overlay.png
Thanks a bunch in advance,
Tim
For the overlay you need to scale the image to the original source dimensions.
To concatenate multiple source files that have the same codecs, use the concat demuxer.
Eg:
Make a playlist.txt with the following format:
file '/path/to/file_1'
file '/path/to/file_2'
file '/path/to/file_3'
[..]
And then:
ffmpeg -f concat -i playlist.txt -i overlay.png -filter_complex "[1:v] scale=1280:720 [ovr];[0:v][ovr] overlay=0:0" ...
If the video and the image are the same size, you can just use:
ffmpeg -f concat -i playlist.txt -i overlay.png -filter_complex "[0:v] overlay"
Update:
Full example:
You cannot filter and copy the video stream at the same time!
ffmpeg -re -f concat -i playlist.txt -i overlay.png -filter_complex "[0:v] overlay" -c:v h264 -c:a libfdk_aac -ar 44100 -f flv rtmp://...
If your audio stream is valid and has one of the supported audio rates (44100, 22050, 11025), you can do:
ffmpeg -re -f concat -i playlist.txt -i overlay.png -filter_complex "[0:v] overlay" -c:v h264 -c:a copy -f flv rtmp://...
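Putting this together with your bash loop: instead of invoking ffmpeg once per file (which causes the gaps), you could generate playlist.txt from the folder and run a single concat process. A rough sketch, assuming the files in /vod are .mp4 with identical codecs:
#!/usr/bin/env bash
# Build the playlist from all mp4 files in /vod
for f in /vod/*.mp4; do
  printf "file '%s'\n" "$f"
done > /vod/playlist.txt
# One continuous stream: concat demuxer plus overlay, re-encoding the video
# (-safe 0 may be required for absolute paths, depending on the ffmpeg version)
ffmpeg -re -f concat -safe 0 -i /vod/playlist.txt -i /vod/overlay.png -filter_complex "[0:v] overlay" -c:v h264 -c:a libfdk_aac -ar 44100 -f flv ServerAddress
Filtering on *.mp4 also keeps overlay.png itself out of the playlist, which the original files=(*) glob would have picked up.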

Ffmpeg video overlay

I am trying to create a video output from multiple video cameras.
Following the example given here, Presenting more than 2 videos using FFmpeg, and other similar examples, I'm getting the error
Output pad "default" for the filter "src" of type "buffer" not connected to any destination
when I run
ffmpeg -i /dev/video1 -i /dev/video0 -filter_complex "[0:0]pad=iw*2:ih[a];[a][1:0]overlay=w[b];[b][2:0]overlay=w:h" -shortest output.mp4
I'm not really sure what this means or how to fix it.
Any help would be greatly appreciated!
Thanks.
When using the "padding" option, you have to specify the size of the output image and where you want to put the input image:
[0:0]pad=iw*2:ih:0:0
Tested under Windows 7 with files of the same size:
ffmpeg -i out.avi -i out.avi -filter_complex "[0:0]pad=iw*2:ih:0:0[a];[a][1:0]overlay=w" -shortest output.mp4
And with webcam capture (vfwcap) and a still picture (as I have only one webcam). BTW, you can see how to scale one of the sources to fit the target (just in case your sources have different resolutions):
ffmpeg -y -f vfwcap -r 10 -i 0 -loop 1 -i photo.jpg -filter_complex "[0:0]pad=iw*2:ih:0:0[a];[1:0]scale=640:480[b];[a][b]overlay=w" -shortest output.mp4
Under Linux:
ffmpeg -i /dev/video1 -i /dev/video0 -filter_complex "[0:0]pad=iw*2:ih:0:0[a];[a][1:0]overlay=w" -shortest output.mp4
If it doesn't work, test a simple recording of video1 and then of video0, and check their properties (type, resolution, fps).
ffmpeg -i /dev/video1 -shortest output1.mp4
ffmpeg -i output1.mp4
If you still have issues, update your question with the ffmpeg console output (as text) for the video1 and video0 captures, and also for the call with the overlay.
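As an aside, ffprobe gives a more compact view of those properties than a full ffmpeg run. A sketch:
ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,width,height,r_frame_rate -of default=noprint_wrappers=1 output1.mp4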

FFmpeg input duration?

With FFmpeg you have the option -t, which will set the duration of the output. However, I do not see a way to limit the duration of the input. Take this command:
ffmpeg -i video.mp4 -c copy -t 60 out.mp4
This simply creates a 60-second clip of the original video. However, if I want to clip the audio while keeping the full video stream, FFmpeg does not seem to have an option for this.
I have tried simply clipping the audio first, then combining the clipped audio with the video file, but this causes video/audio sync issues for me.
-aframes number (output)
Set the number of audio frames to record. This is an alias for -frames:a.
(From the Audio Options section of the FFmpeg documentation.)
ffmpeg -i video.mp4 -c copy -aframes 100 out.mp4
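To translate a duration into a frame count: assuming AAC audio, each frame holds 1024 samples, so at 44100 Hz 60 seconds is roughly 44100 * 60 / 1024 ≈ 2584 frames; the video stream is left untouched:
ffmpeg -i video.mp4 -c copy -aframes 2584 out.mp4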
Use the -itsoffset option.
The following makes the first 10 seconds of audio mute:
ffmpeg -i video.mp4 -vn -acodec copy -ss 10.0 out_audio.mp4
ffmpeg -itsoffset 10.0 -i out_audio.mp4 -i video.mp4 -vcodec copy -acodec copy out.mp4
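Another option, if re-encoding the audio is acceptable, is the atrim filter, which trims only the audio stream while the video is stream-copied. A sketch keeping just the first 60 seconds of audio under the full-length video:
ffmpeg -i video.mp4 -c:v copy -af "atrim=0:60" out.mp4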
