ffmpeg creating multiple output videos, splitting on gt(scene,x)

I want to split one video into multiple parts by detecting the first frame of each shot, using the select filter's scene score in ffmpeg.
The following command records the scene frames and creates a photo mosaic out of them. This indicates to me that the select portion is functional, but I want to use this to create many separate videos, each scene its own video file.
ffmpeg -i video.mpg -vf "select='gt(scene\,0.2331)',scale=320:240,tile=1x100" -frames:v 1 preview.png
Thank you. I think I am close, and I am open to any solution.

You should definitely use the -ss (start time) and -t (duration in seconds from that start time) options. Can you get the time of each of these scene frames? Then you are good to go.
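For example, a minimal two-pass sketch (the threshold, filenames, and example timestamps below are placeholders, not values from the question): first record the pts_time of every scene change, then cut a segment per scene.
ffmpeg -i video.mpg -vf "select='gt(scene\,0.2331)',metadata=print:file=scenes.txt" -f null -
# scenes.txt contains header lines like "frame:12 pts:311040 pts_time:12.48"
ffmpeg -ss 12.48 -t 17.54 -i video.mpg -c copy scene001.mpg
Alternatively, feed all the cut points to the segment muxer at once:
ffmpeg -i video.mpg -c copy -f segment -segment_times 12.48,30.02,55.71 scene%03d.mpg
Note that -c copy can only split on keyframes; re-encode if you need frame-accurate cuts.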

Related

Detect scene change on part of the frame

I have a video file of an online lecture consisting of a slideshow with audio in the background.
I want to save images of each slide as well as the timestamp of that slide.
I do this using the scene and metadata filters:
ffmpeg -i week-01.mp4 -filter_complex "select='gt(scene,0.011)',metadata=print:file=frames/time.txt" -vsync vfr frames/img%03d.jpg
This works fine except for one thing: there is an onscreen timer on the right in the video file.
If I set the threshold small enough to pick up all the slide changes, it also picks up the timer changes.
So here is my question: can I ask ffmpeg to:
analyze only part of the frame (roughly the left 75%, ignoring the right side where the timer sits);
then, on detecting a scene change in this area, save the entire frame and the timestamp?
I thought of making a script that:
crops the video and saves it alongside the original
analyzes the cropped video for scene changes and saves the timestamps
extracts the frames from the original video using the timestamps
Is there a better/faster/shorter way to do this?
Thanks in advance!
You can do it in one command, like this:
ffmpeg -i week-01.mp4 -filter_complex "[0]split=2[full][no_timer];[no_timer]drawbox=w=0.25*iw:h=ih:x=0.75*iw:y=0:t=fill[no_timer];[no_timer]select='gt(scene,0.011)',metadata=print:file=frames/time.txt[no_timer];[no_timer][full]overlay" -vsync vfr frames/img%03d.jpg
Basically, make two copies of the video, use drawbox on one copy to paint solid black over the quarter of the screen on the right, analyze scene change and record scores to file; then overlay the full unpainted frame on top of the painted ones. Due to how overlay syncs frames, only the full frames with corresponding timestamps will be used to overlay on top of the base selected frames.
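If you only need the timestamps rather than the masked frames, a simpler variant (a sketch, assuming the timer really occupies the right quarter) is to run detection on a cropped copy and discard the video output:
ffmpeg -i week-01.mp4 -vf "crop=0.75*iw:ih:0:0,select='gt(scene,0.011)',metadata=print:file=frames/time.txt" -f null -
You can then extract the full frames from the original file at the recorded pts_time values, which is essentially the three-step script from the question collapsed into a single detection pass.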

Scalable solution for converting an image sequence to video

We are developing a stop motion app for kids and schools.
So what we have is:
A sequence of images and audio files (no overlapping audio in v1, but there can be gaps between them)
What we need to do:
Combine the images into a video with a frame rate between 1 and 12 fps
Add multiple audio files at given start times
Encode with H.265 into an MP4 container
I would really like to avoid maintaining a VM or Azure Batch setup running ffmpeg jobs if possible.
Are there any good frameworks or third-party APIs?
I have only found Transloadit as the closest match, but they don't have the option to add multiple audio files.
Any suggestions or experience in this area are much appreciated.
You've mentioned FFmpeg in your tags, and it is a tool that checks all the boxes.
For the first part of your project (making a video from images) you should check this link. To sum up, you'll use this kind of command:
ffmpeg -r 12 -f image2 -i PATH_TO_FOLDER/frame%d.png PATH_TO_FOLDER/output.mp4
-r 12 is your frame rate, here 12 fps. You control the output format with the file extension. To control the video codec check out this link; you'll need to put the option -c:v libx265 before your output filename.
With FFmpeg you add audio the same way you add video: with the -i option followed by your filename. If you want to cut audio, seek into it with -ss and limit the duration with -t. If you want an audio track to start at a certain point, check out -itsoffset; you can find a lot of examples.
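Putting it together, a minimal sketch assuming two clips that should start at 2 s and 9 s (the filenames, times, and the 12 fps rate are placeholders); adelay shifts each clip by milliseconds and amix mixes them (the all=1 flag needs a reasonably recent FFmpeg):
ffmpeg -framerate 12 -i frames/frame%d.png -i clip1.mp3 -i clip2.mp3 -filter_complex "[1:a]adelay=2000:all=1[a1];[2:a]adelay=9000:all=1[a2];[a1][a2]amix=inputs=2:duration=longest[aout]" -map 0:v -map "[aout]" -c:v libx265 -pix_fmt yuv420p output.mp4
Note that amix lowers the volume of each input as it mixes; add a volume filter after it if that matters for your use case.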

Blank out video frames for last two minutes of MP4 (while keeping audio intact)

I am trying to make a copy of one of my mp4 movies with audio intact, but blacking out the video frames only during the last few minutes. Basically I want to keep the end-credit music but without the artifacted video.
I found this answer, which works perfectly for an entire mp4 file (including a test fragment I made of the above ending-credits sequence), but I need it applied, as stated above, to just the end of the whole copied mp4.
In this case I don't want to start blanking the video stream frames until after 2h 7m 30s. I messed around with combinations of the -ss, -start_time and -timecode 02:07:31 params, but I'm an ffmpeg noob and couldn't get it to produce anything but cut-out sections or the whole copy blanked.
Any guidance would be greatly appreciated!
You can use the drawbox filter to black those frames out.
ffmpeg -i in -vf drawbox=c=black:t=fill:enable='gt(t\,7650)' -c:a copy out
This will black out the frames from 7650 seconds (2×3600 + 7×60 + 30 = 7650, i.e. 2:07:30) onwards.

Generate thumbnails from the middle of every scene change in a video using ffmpeg or other software

Is there a way to generate thumbnails at scene changes using ffmpeg, but picking the middle between two I-frames instead of the I-frames themselves? The assumption is that the midpoint between two major changes is usually the best place to take a thumbnail.
Yes, there is a way!
Execute this command:
ffmpeg -i yourvideo.mp4 -vf select="eq(pict_type\,I)" -vsync 0 -an thumb%03d.png
It is really simple: this selects every I-frame, which encoders typically place at scene changes, so you get roughly one thumbnail per scene. Note that it picks the I-frames themselves, not the midpoint between two of them.
The items to change:
yourvideo.mp4 (change this to your video input)
thumb%03d.png (this is your output pattern)
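If you genuinely want the midpoint between scene changes rather than the I-frames, I don't know of a single-command way; here is a two-pass shell sketch (the 0.3 threshold and the filenames are placeholders):
# pass 1: record the time of each detected scene change
ffmpeg -i yourvideo.mp4 -vf "select='gt(scene,0.3)',metadata=print:file=scenes.txt" -f null -
# pass 2: grab one frame halfway between consecutive change times
prev=0; i=0
grep -o 'pts_time:[0-9.]*' scenes.txt | cut -d: -f2 | while read t; do
  mid=$(awk -v a="$prev" -v b="$t" 'BEGIN{printf "%.3f",(a+b)/2}')
  ffmpeg -ss "$mid" -i yourvideo.mp4 -frames:v 1 "thumb$(printf %03d "$i").png"
  prev=$t; i=$((i+1))
done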

Adding a color filter at specific intervals in ffmpeg

I am looking to apply the color filter to an RTMP stream in ffmpeg at specific time intervals, say for 10 seconds every 10 seconds. I have tried two approaches. The first:
-vf "color=#8EABB8#0.9:480x208,select='gte(t,10)*lte(t,20)' [color];[in][color] overlay [out]"
This streams only the 10 seconds indicated by the select and applies the color filter, rather than playing the whole stream and applying the filter only to that 10-second window.
I then learnt about split and fifo and tried this approach:
-vf "[in] split [no-color], fifo, [with-color] overlay [out]; [no-color] fifo, select='gte(t,10)*lte(t,20)' [with-color]"
I would expect this to play the entire stream and apply filters only to the 10 seconds specified, but it does the same as the first approach and just plays the selected 10 seconds rather than the entire stream.
Thanks in advance.
You changed the order of the streams going into the overlay.
It seems that if a "select"ed stream goes in as the first input to the overlay filter, overlay also blanks out its output during the non-selected times.
But if you first give overlay a stable stream and then the selected one, it will output a stream for the whole duration.
I tried the following set of filters:
-vf "[in]split[B1][B2];[B1]fifo,drawbox=-1:-1:5000:5000:invert:2000,vflip,hflip[B1E];[B2]fifo,select='gte(t,5)'[B2E];[B1E][B2E]overlay[out]"
My version as a graph:

         ,--[B1]--fifo--drawbox--vflip--hflip--[B1E]--.
[in]--split                                            overlay--[out]
         '--[B2]--fifo--select-----------------[B2E]--'

Your version was (the select filter is the first overlay input!!):

         ,--fifo--select--[with-color]--.
[in]--split                              overlay--[out]
         '--[no-color]--fifo------------'
The reason is that
...[B2E];[B1E][B2E]overlay...
and
...,[B1E]overlay...
are equivalent: an unlabelled chain output simply fills the next free input pad of the following filter.
Nevertheless, some problems may remain: for example, do you need the effect just once, or every 10 seconds?
As this question discusses, there doesn't appear to be a way to apply video filters to a specific time period of a video stream, short of splitting it into pieces, applying filters, and recombining. Please share if you find a better method.
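For what it's worth, newer FFmpeg releases support timeline editing on many filters via the enable option (the drawbox answer earlier on this page uses it), which sidesteps splitting entirely when your filter supports it. A sketch that desaturates the picture for 10 seconds out of every 20 (the hue filter and the interval are illustrative):
ffmpeg -i in.mp4 -vf "hue=s=0:enable='lt(mod(t,20),10)'" -c:a copy out.mp4
Check the filter's documentation for "Timeline editing" support before relying on this.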
I'm dealing with a similar problem. I used a combination of the split, overlay and concat filters, and it works; you can try it:
-filter_complex "[0:v]split[v1][v2];[v1]select='lt(t,5)',setpts=PTS-STARTPTS[iv1];[v2]select='gte(t,5)'[over];[over][1:v] overlay=W-w:H-h:shortest=1,setpts=PTS-STARTPTS[iv2];[iv1][iv2]concat=n=2:v=1:a=0"
But my problem is that I use a GIF as the second input because it carries transparency information, yet GIF files don't contain audio. How can I make a movie with both transparency (alpha) and audio?
