I have a directory full of *.jpg images that I want to concatenate into a video. This works fine with the concat demuxer:
ffmpeg -f concat -safe 0 -i files.txt -c:v libx264 -pix_fmt yuv420p out.mp4
The files.txt contains the list of absolute pathnames of the images. This list is created with the find command in a Linux bash shell.
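For reference, each line of files.txt must be a file directive with a quoted path. A sketch of building it with find (the -printf option is GNU-specific; the temporary directory and dummy files below only stand in for the real image folder):

```shell
# Stand-in for the real image directory (replace with your own path).
dir=$(mktemp -d)
touch "$dir/a.jpg" "$dir/b.jpg"

# GNU find: print one "file '<absolute path>'" line per image, sorted.
find "$dir" -maxdepth 1 -name '*.jpg' -printf "file '%p'\n" | sort > files.txt
cat files.txt
```

With -safe 0 as in the command above, absolute paths are accepted by the concat demuxer.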
Now I want to add a text overlay, where every image shows a text representing the creation date.
I found the drawtext video filter, as in this answer: Text on video ffmpeg
However, I think I cannot set a video filter per input file when using the concat demuxer; as I understand it, only one filter chain can be applied to the whole ffmpeg call.
Is there any other way to concatenate the files to a video and add text individually to each image?
EDIT: A trivial solution is to add the text to the images first by iterating over them. This would either irreversibly change the images or create copies, temporarily doubling the disk space requirement. It would be preferable to add the text on the fly for each frame so that no additional disk space is required.
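One on-the-fly sketch, under assumptions (a plain list of image paths in images.list, GNU date, all images sharing the same dimensions, and a drawtext build with fontconfig, otherwise add fontfile=): annotate each image in a first ffmpeg process and pipe the MJPEG frames straight into a second ffmpeg that encodes the video, so no annotated copies are written to disk. The two synthesized test images only make the sketch self-contained; point images.list at the real photos instead.

```shell
# Self-contained stand-ins: two synthetic images (use your real photos).
mkdir -p imgs
ffmpeg -loglevel error -y -f lavfi -i color=red:size=320x240 -frames:v 1 imgs/a.jpg
ffmpeg -loglevel error -y -f lavfi -i color=blue:size=320x240 -frames:v 1 imgs/b.jpg
printf '%s\n' "$PWD/imgs/a.jpg" "$PWD/imgs/b.jpg" > images.list

# Stage 1, per image: draw the file's modification date (GNU date -r; a
# stand-in for the creation date) and emit the frame as MJPEG on stdout.
# -nostdin keeps the inner ffmpeg from eating the loop's input list.
while IFS= read -r img; do
  ffmpeg -nostdin -loglevel error -i "$img" \
    -vf "drawtext=text='$(date -r "$img" '+%Y-%m-%d')':x=10:y=10:fontsize=24:fontcolor=white" \
    -f image2pipe -c:v mjpeg -q:v 2 -
done < images.list |
# Stage 2: encode the annotated frame stream; only out.mp4 is written.
ffmpeg -loglevel error -y -f image2pipe -c:v mjpeg -framerate 1 -i - \
  -c:v libx264 -pix_fmt yuv420p out.mp4
```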
Related
I have security cam footage from which I'd like to remove all frames that don't contain any change. I followed the question and answer on Remove sequentially duplicate frames when using FFmpeg
But my footage has a timestamp as part of the picture, so even if the image itself doesn't change, the timestamp still changes every second.
My idea for making ffmpeg's mpdecimate filter ignore the timestamp:
The original file is called security_footage.mp4.
1. Crop the timestamp away using iMovie; this creates the file security_footage_cropped.mp4.
2. Run ffmpeg -i security_footage_cropped.mp4 -vf mpdecimate -loglevel debug -f null - > framedrop.log 2>&1 to get a log of all frames that are to be dropped from the file.
3. Somehow apply the log framedrop.log of security_footage_cropped.mp4 to the original file security_footage.mp4.
Question 1: Does anyone have a good idea how I could do step 3, i.e. apply the mpdecimate filter log to another video file?
If not, does anyone have a good idea how to run mpdecimate while ignoring the timestamp in the top-left corner?
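Absent a built-in way to replay a decimation log, step 3 could be approximated by pulling the dropped frames' timestamps out of framedrop.log and excluding them with the select filter on the original file. This is only a sketch: the grep pattern assumes the debug log tags dropped frames with drop pts:... pts_time:... (check your own log, the exact wording varies across FFmpeg versions), and the sample log below is synthetic.

```shell
# Synthetic stand-in for framedrop.log -- replace with your real log.
cat > framedrop.log <<'EOF'
[Parsed_mpdecimate_0 @ 0x1] keep pts:0 pts_time:0
[Parsed_mpdecimate_0 @ 0x1] drop pts:512 pts_time:0.04
[Parsed_mpdecimate_0 @ 0x1] drop pts:1024 pts_time:0.08
EOF

# Collect the timestamps of dropped frames (the pattern is an assumption).
times=$(grep 'drop pts' framedrop.log | grep -oE 'pts_time:[0-9.]+' | cut -d: -f2)

# Keep a frame only when its time matches none of the dropped timestamps,
# e.g. select='not(eq(t\,0.04))*not(eq(t\,0.08))*1' (commas escaped for ffmpeg).
expr=$(for t in $times; do printf 'not(eq(t\\,%s))*' "$t"; done; printf '1')
echo "$expr"

# Applied to the ORIGINAL footage (uncomment and run against your real file):
# ffmpeg -i security_footage.mp4 -vf "select='${expr}',setpts=N/FRAME_RATE/TB" out.mp4
```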
Thanks in advance for any help!
I would suggest the following method:
clone the video stream
in one copy, black out the region with the timestamp
run mpdecimate on it.
overlay the other copy on the result. The overlay filter syncs to its first input, so the full clone is only shown for frames that survive decimation in the base input.
ffmpeg -i security_footage.mp4 -vf "split=2[full][crop];[crop]drawbox=60:40:200:100:t=fill,mpdecimate[crop];[crop][full]overlay" out.mp4
A 200x100 black box is drawn with its top-left corner at (60,40) from the top-left corner of the frame (drawbox takes x:y:w:h positionally; the default color is black).
I'm trying to overlay a video (for example a report) with various other videos or images, such as hints pointing to the Facebook page or the website. These other videos and images are smaller than the original and sometimes transparent (RGBA).
I already tried to overlay multiple videos, which works pretty well:
ffmpeg -i 30fps_fhd.mp4 -i sample.mp4 -i timer.webm -i logo.jpg -filter_complex "overlay=x=100:y=1000,overlay=x=30:y=66:eof_action=pass,overlay=x=0:y=0" -acodec copy -t 70 out.mp4
But now I want some of the videos or images to start not at the beginning of the video, but after a period of time.
I found options like itsoffset and setpts, but I don't know how to apply them to this multiple video/image overlay command.
Best regards, Bamba
Okay, I found out how it works. You take an input with [index:v], apply effects to it (separated by commas), and end the chain with a label ([name]) that stores the result. Then you apply the overlay effect by writing two labels next to each other (for an unedited input, just its input label [index:v]) followed by overlay and its parameters.
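To delay an overlay rather than shift a whole input, the per-input chains described above combine with overlay's timeline option (enable), which standard builds support. A sketch with placeholder file names; the two synthesized inputs only make it runnable, so substitute your own, and mpeg4 is used only so the sketch works on builds without libx264:

```shell
# Stand-in inputs: a 20 s test clip and a semi-transparent logo.
ffmpeg -loglevel error -y -f lavfi -i testsrc=size=320x240:rate=25 -t 20 \
  -c:v mpeg4 base.mp4
ffmpeg -loglevel error -y -f lavfi -i "color=green@0.5:size=64x64,format=rgba" \
  -frames:v 1 logo.png

# [1:v] gets its own chain (format=rgba keeps transparency) ending in the
# label [lg]; [0:v] and [lg] then feed overlay, whose enable expression is
# evaluated per frame, so the logo shows only between t=5s and t=15s.
ffmpeg -loglevel error -y -i base.mp4 -i logo.png -filter_complex \
  "[1:v]format=rgba[lg];[0:v][lg]overlay=x=30:y=66:enable='between(t,5,15)'" \
  -c:v mpeg4 out.mp4
```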
The source video is H.264 in an mp4 container; I'm trying to split it into individual encoded frames. I tried the following command line:
ffmpeg -i "input.mp4" -f image2 "%d.h264"
But that creates JPEGs with the extension ".h264", rather than actual H.264 frames.
It turns out the correct command line is:
ffmpeg -i "inputfile" -f image2 -vcodec copy -bsf h264_mp4toannexb "%d.h264"
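A quick sanity check of the result: raw Annex-B H.264 begins with a start code (00 00 00 01 or 00 00 01), while a JPEG begins with ff d8. The file fabricated below only stands in for a real extracted frame:

```shell
# Stand-in: four bytes of an Annex-B start code (inspect a real 1.h264 instead).
printf '\000\000\000\001' > 1.h264
head -c 4 1.h264 | od -An -tx1   # a JPEG would show ff d8 here
```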
There is no such thing as an "h264" image. H.264 is a standard for video compression with many revisions and profiles, as well as proprietary implementations of H.264 encoders and decoders.
If you are trying to convert a video into an image sequence, you need to decide which image format you want the exports to be. The -f image2 argument selects the image-sequence muxer; the actual image codec is then chosen from the output file extension. You can save the outputs as bmp, png, or tiff, or compress the images into a .jpg container (which FFmpeg defaulted to in your original command because ".h264" is not an image extension it understands).
Edit: If for some reason you are trying to create a sequence of files that contain one encoded frame each, note that H.264 relies on temporal prediction, so most frames extracted from an existing stream only decode together with the frames they reference; only keyframes stand alone. An intra-only H.264 stream avoids that, but for single frames an image format accomplishes the same thing more simply.
I have a QuickTime video file whose video stream is in Motion JPEG format. I extract every frame in the file with
ffmpeg -i a.mov -vcodec copy -f image2 %d.jpg
I found that every JPEG file actually contains two FFD8 markers, which means there are two images in one single JPEG file.
Is this correct? Is the file interlaced? Anything special need to pass to codec?
Yes, Motion JPEG supports interlaced content. If each extracted JPEG is half the full video height, that means the mov is interlaced, and you cannot use -vcodec copy to extract the frames. Try ffmpeg's -deinterlace option or the yadif filter.
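For example, re-encoding with yadif instead of stream-copying might look like the sketch below (the synthesized a.mov is only a stand-in for the real file; yadif's defaults output one frame per input frame with automatic field-order detection):

```shell
# Stand-in input: a short MJPEG .mov (use the real a.mov instead).
ffmpeg -loglevel error -y -f lavfi -i testsrc=size=320x240:rate=25 -t 0.2 \
  -c:v mjpeg a.mov

# Decode, deinterlace, and write one progressive JPEG per frame.
ffmpeg -loglevel error -y -i a.mov -vf yadif -q:v 2 %d.jpg
```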
I am looking for a tool or program to automatically grab snapshots (at a few minutes' interval) from a video file without displaying it. Mkv support would be nice!
From the ffmpeg manual page http://ffmpeg.org/ffmpeg.html:
For extracting images from a video:
ffmpeg -i foo.avi -r 1 -s WxH -f image2 foo-%03d.jpeg
This will extract one video frame per second from the video and will output them in files named ‘foo-001.jpeg’, ‘foo-002.jpeg’, etc. Images will be rescaled to fit the new WxH values.
There are other options for extracting images depending on frame position or time, etc.
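For the few-minutes interval asked about, the fps filter is an alternative to -r; e.g. fps=1/120 takes one frame every two minutes. The sketch below synthesizes a short stand-in clip and uses fps=1 only so the tiny sample produces output at all; substitute the real .mkv and rate:

```shell
# Stand-in input: a 2 s test clip (use the real .mkv instead).
ffmpeg -loglevel error -y -f lavfi -i testsrc=size=320x240:rate=25 -t 2 \
  -c:v mpeg4 input.mkv

# One snapshot per second here; use fps=1/120 for one every two minutes.
ffmpeg -loglevel error -y -i input.mkv -vf fps=1 -q:v 2 snap-%03d.jpg
```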