Detect scene change on part of the frame - ffmpeg

I have a video file of an online lecture consisting of a slideshow with audio in the background.
I want to save images of each slide as well as the timestamp of that slide.
I do this using the scene and metadata filters:
ffmpeg -i week-01.mp4 -filter_complex "select='gt(scene,0.011)',metadata=print:file=frames/time.txt" -vsync vfr frames/img%03d.jpg
This works fine except for one thing: there is a timer onscreen on the right side of the video.
If I set the threshold small enough to pick up all the slide changes, it also picks up the timer changes.
So here is my question: can I ask ffmpeg to
analyze only part of the frame (excluding the right side, i.e. roughly the left 75% of the frame), and
then, on detecting a scene change in that area, save the entire frame and the timestamp?
I thought of making a script that:
crops the video and saves it alongside the original,
analyzes the cropped video for scene changes and saves the timestamps,
extracts the frames from the original video using those timestamps.
Is there a better/faster/shorter way to do this?
Thanks in advance!

You can do it in one command like this:
ffmpeg -i week-01.mp4 -filter_complex "[0]split=2[full][no_timer];[no_timer]drawbox=w=0.25*iw:h=ih:x=0.75*iw:y=0:t=fill[no_timer];[no_timer]select='gt(scene,0.011)',metadata=print:file=frames/time.txt[no_timer];[no_timer][full]overlay" -vsync vfr frames/img%03d.jpg
Basically, make two copies of the video, use drawbox (with t=fill) on one copy to paint solid black over the right quarter of the screen, run scene-change detection on that copy and record the scores to file; then overlay the full, unpainted frames on top of the painted ones. Due to how overlay syncs frames, only the full frames whose timestamps match the selected base frames will be used, so you get complete frames out while the timer region is ignored during detection.
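If you also want the timestamps as a plain list, the metadata log can be post-processed. A minimal sketch, assuming the frames/time.txt produced above and a POSIX shell; the output file name is just illustrative:
# extract the pts_time values from the metadata=print log
grep -o 'pts_time:[0-9.]*' frames/time.txt | cut -d: -f2 > frames/timestamps.txt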

Related

Use debug log from ffmpeg mpdecimate to remove frames of another input file

I have security cam footage from which I'd like to remove all frames that don't contain any change. I followed the question and answer on Remove sequentially duplicate frames when using FFmpeg.
But my footage has a timestamp as part of the picture, so even if the image itself doesn't change, the timestamp still changes every second.
My idea for making ffmpeg's mpdecimate filter ignore the timestamp:
The original file is called security_footage.mp4
Crop the timestamp away using iMovie, creating the file security_footage_cropped.mp4.
Run ffmpeg -i security_footage_cropped.mp4 -vf mpdecimate -loglevel debug -f null - > framedrop.log 2>&1 to get a log of all frames to be dropped from the file.
Somehow apply the log framedrop.log of security_footage_cropped.mp4 to the original file security_footage.mp4.
Question 1: Does anyone have a good idea how I could do step 3, i.e. apply the mpdecimate filter log to another video file?
If not, does anyone have a good idea how to run mpdecimate while ignoring the timestamp in the top-left corner?
Thanks in advance for any help!
I would suggest the following method:
clone the video stream
in one copy, black out the region with the timestamp
run mpdecimate on it.
overlay the other copy on the first one. The overlay filter syncs with its first input, so the full clone is only shown when a frame exists in the base input.
ffmpeg -i security_footage.mp4 -vf "split=2[full][crop];[crop]drawbox=200:100:60:40:t=fill,mpdecimate[crop];[crop][full]overlay" out.mp4
A 60x40 black box is drawn with its top-left corner at (200,100), measured from the top-left corner of the frame (drawbox takes x:y:w:h positionally); adjust these values to cover your timestamp.
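The same region can also be given with named options, which is easier to read; a sketch with the same assumed coordinates (adjust them to wherever your timestamp sits):
ffmpeg -i security_footage.mp4 -vf "split=2[full][blk];[blk]drawbox=x=200:y=100:w=60:h=40:color=black:t=fill,mpdecimate[blk];[blk][full]overlay" out.mp4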

Why does the zoom effect only apply to the first image?

ffmpeg -i img%03d.jpeg -i 1.mp3 -vf zoompan=z='zoom+0.002':d=25*5:s=1280x800 -pix_fmt yuv420p -c:v libx264 -t 01:05:00 out12345.mp4
I have 3 images and 1 audio file, and I am trying to create a video, expecting each image to have a zoom effect.
Here is what I get: the first image shows the zoom effect, then the second image shows up for a split second, and then the last image stays on screen without any effect.
What am I doing wrong?
The zoompan filter operates per frame, so normally the command should produce the desired result, i.e. each frame gets zoomed in over 125 frames.
However, when an image in the stream has different properties, the filtergraph is reinitialized: a new zoompan instance is created, which starts on the changed frame as if from scratch. This new set of output frames has the same timestamps as frames already emitted, so they are dropped.
There are two workarounds to prevent reinitialization:
1) make sure all frames in the input are uniform in properties (see the sketch after this list)
or
2) forcibly prevent reinitialization by adding -reinit_filter 0 before the input. Only a few filters can handle frames with changing properties, so avoid doing this unless you are sure.
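A minimal sketch of both workarounds, assuming the three JPEGs and the 1280x800 size from the question; the uniform%03d.png intermediate names are just illustrative:
# workaround 1: normalize every image to the same size and SAR so the filtergraph is never reinitialized
ffmpeg -i img%03d.jpeg -vf "scale=1280:800:force_original_aspect_ratio=decrease,pad=1280:800:(ow-iw)/2:(oh-ih)/2,setsar=1" uniform%03d.png
ffmpeg -i uniform%03d.png -i 1.mp3 -vf "zoompan=z='zoom+0.002':d=125:s=1280x800" -pix_fmt yuv420p -c:v libx264 -shortest out.mp4
# workaround 2: keep the original inputs but disable filtergraph reinitialization for that input
ffmpeg -reinit_filter 0 -i img%03d.jpeg -i 1.mp3 -vf "zoompan=z='zoom+0.002':d=125:s=1280x800" -pix_fmt yuv420p -c:v libx264 -shortest out.mp4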

generate thumbnail from the middle of every scenes changes in a video using ffmpeg or other software

Is there a way to generate thumbnails from scene changes using ffmpeg? Instead of picking the I-frames, picking the middle between two I-frames? This assumes the middle between two major changes is usually the best moment to take a thumbnail from.
Yes, there is a way!
Execute this command:
ffmpeg -i yourvideo.mp4 -vf select="eq(pict_type\,I)" -vsync 0 -an thumb%03d.png
It is really simple: the select filter keeps only the I-frames, which encoders typically place at scene changes, so ffmpeg does all the work of picking representative frames for you.
The parts you need to change:
yourvideo.mp4 (you need to change this to your video input)
thumb%03d.png (This is your output)
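If you would rather drive the selection with the scene-change score (as in the other answers on this page) instead of I-frame placement, a variant like the following may work; 0.4 is an assumed threshold you will want to tune:
ffmpeg -i yourvideo.mp4 -vf "select='gt(scene,0.4)'" -vsync vfr thumb%03d.png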

ffmpeg creating multiple output videos, splitting on gt(scene,x)

I want to split one video up into multiple parts based on detecting the first frame of each shot, using scene selection in ffmpeg.
The following command records the scene frames and creates a photo mosaic out of them. This indicates to me that the select portion is functional, but I want to use it to create many separate videos, with each scene as its own video file.
ffmpeg -i video.mpg -vf "select='gt(scene,0.2331)',scale=320x240,tile=1x100" -frames:v 1 preview.png
Thank you. I think I am close, and I am open to any solution.
You should definitely use the -ss (start time) and -t (duration in seconds from the start time) options. Can you get the time for each of these scene frames? Then you are good to go.
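A sketch of how those pieces fit together, assuming you first dump the scene-change timestamps with the metadata filter (as in the first answer on this page); START and DURATION are placeholders to fill in per scene:
# 1) record scene-change timestamps to a log without writing any frames
ffmpeg -i video.mpg -filter_complex "select='gt(scene,0.2331)',metadata=print:file=time.txt" -f null -
# 2) cut one file per scene, from START for DURATION seconds
ffmpeg -ss START -t DURATION -i video.mpg scene01.mp4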

FFmpeg fade effects between frames

I want to create a slideshow of my images with fade-in and fade-out transitions between them, and I am using the FFmpeg fade filter.
If I use command:
ffmpeg -i input.mp4 -vf "fade=in:5:8" output.mp4
to create the output video with a fade effect, then I get an output video whose first 5 frames are black, after which the images are shown with a fade-in effect; but I want a fade-in/fade-out effect at each image change.
How can I do that?
Please suggest a solution for a CentOS server, because I am using FFmpeg only on that server.
To create a video with fade effects, break the video into parts and create a separate video for each image. For instance, if you have 5 images, first create 50-60 copies of each image and turn each set into a video:
$command= "ffmpeg -r 20 -i images/%d.jpg -y -s 320x240 -aspect 4:3 slideshow/frame.mp4";
exec($command." 2>&1", $output);
This gives you 5 different videos. Then make 10-12 copies of each of those five images and again create separate videos, this time with fade effects:
ffmpeg -i input.mp4 -vf "fade=in:5:8" output.mp4
After this you will have videos in order: the video for image 1 and its fade effect, then for image 2 and its fade effect, and so on. Now combine those videos in that order to get the whole video.
For combining the videos you need:
$command = "cat pass.mpg slideshow/frame.mpg > final.mpg";
That is, convert the videos to .mpg, join them with cat, and then re-encode the result to .mp4 or .avi to view it properly. The intermediate .mpg files will not play properly, so do not worry about that; once converted to .mp4, the result works fine.
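A sturdier way to do the joining step is ffmpeg's concat demuxer instead of cat; a sketch, where list.txt is an assumed text file holding one file 'part.mp4' line per segment in playback order:
# join the per-image videos without the mpg round-trip
ffmpeg -f concat -safe 0 -i list.txt -c copy final.mp4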
You can make a slideshow with crossfading between the pictures by using the framerate filter. In the following example, 0.25 is the framerate used for reading in the pictures, in this case 4 seconds per picture. The fps parameter sets the output framerate. The interp_start and interp_end parameters control the fading effect:
interp_start=128:interp_end=128 means no fading at all.
interp_start=0:interp_end=255 means continuous fading: when one picture has faded out and the next has fully faded in, the third immediately begins to fade in, with no pause for showing the second picture.
interp_start=64:interp_end=191 means half of the time is a pause for showing the pictures and the other half is fading. Unfortunately it won't be a full fade from 0 to 100%, but only from 25% to 75%. That's not exactly what you might want, but better than no fading at all.
ffmpeg -framerate 0.25 -i IMG_%3d.jpg -vf "framerate=fps=30:interp_start=64:interp_end=192:scene=100" test.mp4
You can use gifblender to create the blended intermediate frames from your images and then convert those to a movie with ffmpeg.
