Concatenating/Splicing overlapping video clips with ffmpeg

I'm trying to concatenate multiple short .mp4 video clips from a security camera. The camera records short clips, with a few seconds on either end of a timespan when motion is detected. For example, two minutes of video will often be broken up into four ~35 second clips, with the first/last few seconds of each clip being duplicative of the last/first few seconds of the previous/next clip.
I simply concatenate the clips together using the ffmpeg concat demuxer, as described here: How to concatenate two MP4 files using FFmpeg?, with
(echo file 'first file.mp4' & echo file 'second file.mp4' )>list.txt
ffmpeg -safe 0 -f concat -i list.txt -c copy output.mp4
Or else I transcode them into intermediate MPEG-2 transport streams, which I can then concatenate with the file-level concat protocol, as described here: https://trac.ffmpeg.org/wiki/Concatenate#protocol, with
ffmpeg -i "first file.mp4" -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate1.ts
ffmpeg -i "second file.mp4" -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate2.ts
ffmpeg -i "concat:intermediate1.ts|intermediate2.ts" -c copy -bsf:a aac_adtstoasc output.mp4
But either way, the resulting video (output.mp4) jumps backward in time a few seconds every half-minute or so because of the duplicated frames.
I want to throw out the duplicate frames and tie the clips together based on timestamps to achieve smooth playback of the concatenated full-length video. I'd strongly prefer to do this on Windows with ffmpeg if possible. Surely this has been done before, right? Are there timestamps in the .mp4 files that I can use to determine how much overlap there is, and then splice at the proper point in time? And if so, how do I read them, how do I splice at an exact point in time, and how do I get around the keyframe issue (stream copy can only cut on keyframes) so that I can splice at the exact point in time?
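If the camera stamps each file with a creation time, ffprobe can read it along with the duration; a hedged sketch (the creation_time tag is only present if the camera writes it):
ffprobe -v quiet -show_entries format=duration:format_tags=creation_time -of default=noprint_wrappers=1 "first file.mp4"
Subtracting one clip's creation_time plus its duration from the next clip's creation_time estimates the overlap. You could then trim that much off the front of the second clip before concatenating, e.g. with a hypothetical 4.2-second overlap (this re-encodes the trimmed clip, since stream copy can only cut on keyframes):
ffmpeg -ss 4.2 -i "second file.mp4" -c:v libx264 -c:a aac trimmed2.mp4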

Related

Using ffmpeg, jpg to mp4 to mpegts, play with HLS M3U8, only first TS file plays - why?

Before posting I searched and found similar questions on Stack Overflow (I list some below), but none helped me towards a solution, hence this post. My case differs from many posts I have seen in the duration each image is shown within the movie file.
A camera captures 1 image every 30 seconds. I need to stream them, preferably via HLS, so I wrap 2 images in an MP4 and then convert the MP4 to mpegts. Each MP4 and TS file plays fine individually (each contains two images, each image transitions after 30 seconds, and each movie file is 1 minute long).
When I reference the two TS files in an M3U8 playlist, only the first TS file gets played. Can anyone advise why it stops and how I can get it to play all the TS files I expect to create, not just the first one? Besides my ffmpeg commands, I also include my VLC log file (though I expect to stream to Firefox/Chrome clients). I am using ffmpeg 4.2.2-static installed on an AWS EC2 instance with AMI2 Linux.
I have four jpgs named image11.jpg, image12.jpg, image21.jpg, image22.jpg. The images look nearly identical; only the timestamp in the top left changes.
The following command creates 1.mp4, using image11.jpg and image12.jpg, each image displayed for 30 seconds, for a total duration of 1 minute. It plays as expected.
ffmpeg -y -framerate 1/30 -f image2 -i image1%1d.jpg -c:v libx264 -vf "fps=1,format=yuvj420p" 1.mp4
I then convert 1.mp4 to an mpegts file, creating 1.ts. It plays as expected.
ffmpeg -y -i 1.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -f mpegts 1.ts
I repeat the above steps for image21.jpg and image22.jpg, creating 2.mp4 and 2.ts:
ffmpeg -y -framerate 1/30 -f image2 -i image2%1d.jpg -c:v libx264 -vf "fps=1,format=yuvj420p" 2.mp4
ffmpeg -y -i 2.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -f mpegts 2.ts
Thus now I have 1.mp4, 1.ts, 2.mp4, 2.ts and all four play individually just fine.
Using ffprobe I can confirm their duration is 60 seconds, for example:
ffprobe -i 1.ts -v quiet -show_entries format=duration -hide_banner -print_format json
My m3u8 playlist follows:
#EXTM3U
#EXT-X-VERSION:4
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-MEDIA-SEQUENCE:1
#EXT-X-TARGETDURATION:60.000
#EXTINF:60.0000,
1.ts
#EXTINF:60.000,
2.ts
#EXT-X-ENDLIST
Can anyone advise where I am going wrong?
VLC Error Log (though I expect to play via web browser)
I have researched the process using these (and other pages) as a guide:
How to create a video from images with ffmpeg
convert from jpg to mp4 by ffmpeg
ffmpeg examples page
FFMPEG An Intermediate Guide/image sequence
How to use FFmpeg to convert images to video
Take a look at the start_pts/start_time in the ffprobe -show_streams output; my guess is that they all start at or near zero, which will cause playback to fail after your first segment.
You can still produce them independently but you will want to use something like -output_ts_offset to correctly set the timestamps for subsequent segments.
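A quick way to check, plus a hedged example of the offset fix (the 60-second value assumes each segment is exactly one minute long):
ffprobe -v quiet -show_entries stream=start_pts,start_time -of default=noprint_wrappers=1 2.ts
ffmpeg -y -i 2.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -output_ts_offset 60 -f mpegts 2.ts
With that offset, the timestamps in 2.ts begin at 60 seconds instead of zero, so playback can continue past the first segment.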
The following solution works well for me. I have tested it uninterrupted for more than two hours and believe it ticks all my boxes. (Edited because I forgot the all-important -re flag.)
ffmpeg will loop continuously, reading test.jpg and streaming it to my RTMP server. When my camera posts an image every 30 seconds, I copy the new image on top of the existing test.jpg, which in effect changes what is streamed out.
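One detail worth noting: if the overwrite is not atomic, ffmpeg can occasionally pick up a half-written jpg. A minimal sketch of a safer copy, assuming a Linux shell and a hypothetical incoming file name:
cp incoming.jpg test.jpg.tmp && mv test.jpg.tmp test.jpg
A rename within the same filesystem is atomic, so ffmpeg always sees either the old image or the new one, never a partial file.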
Note the command below is all one line; I have inserted line breaks to assist reading. The order of the parameters is important - the -loop and -fflags +genpts options, for example, must appear before the -i parameter.
ffmpeg
-re
-loop 1
-fflags +genpts
-framerate 1/30
-i test.jpg
-c:v libx264
-vf fps=25
-pix_fmt yuvj420p
-crf 30
-f fifo -attempt_recovery 1 -recovery_wait_time 1
-f flv rtmp://localhost:5555/video/test
Some arguments explained:
-re means read the input at its native frame rate, i.e. play in real time
-loop 1 (1 turns the loop on, 0 off)
-fflags +genpts is something I only half understand. PTS (presentation timestamps) determine when each frame is shown, and without this flag, I believe, the PTS is reset to zero with every new image. Using this argument means I avoid EXT-X-DISCONTINUITY when a new image is served.
-framerate 1/30 means one frame every 30 seconds
-i test.jpg is my image 'placeholder'. As new images are received via a separate script, the script overwrites this image. Combined with -loop, this means the ffmpeg output will reference the new image.
-c:v libx264 selects H264 video output formatting
-vf fps=25 - removing this, or using a different value, resulted in my output stream not being 30 seconds.
-pix_fmt yuvj420p (sometimes I have seen yuv420p referenced, but this did not work in my environment). I believe jpgs can use different colour ranges and this switch ensures I can process a wider choice.
-crf 30 sets the constant rate factor, i.e. the quality/compression trade-off (important for my client). Note that lower CRF values mean higher quality and less compression (0 is lossless, 23 is the x264 default), so 30 trades some quality for a smaller stream.
-f fifo -attempt_recovery 1 -recovery_wait_time 1 -f flv rtmp://localhost:5555/video/test is part of the magic to go with -loop. I believe it keeps the connection open with my stream server and reduces the risk of DISCONTINUITY tags in the playlist.
I hope this helps someone going forward.
The following links helped nudge me forward, and I share them as they might help others improve upon my solution:
Creating a video from a single image for a specific duration in ffmpeg
How can I loop one frame with ffmpeg? All the other frames should point to the first with no changes, maybe like a recusion
Display images on video at specific framerate with loop using FFmpeg
Loop image ffmpeg HLS
https://trac.ffmpeg.org/wiki/Slideshow
https://superuser.com/questions/1699893/generate-ts-stream-from-image-file
https://ffmpeg.org/ffmpeg-formats.html#Examples-3
https://trac.ffmpeg.org/wiki/StreamingGuide

How to extend a video by freezing the last frame without reencoding the whole stream?

I'd like to achieve two things without re-encoding the whole video stream:
Extend a video by freezing the last frame for a given duration.
Extend a video by freezing a frame at a given timestamp for a given duration.
Currently I'm using ffmpeg -i in.mp4 -vf tpad=stop_mode=clone:stop_duration=5 out.mp4, but it requires encoding the whole video stream and only allows freezing the last frame of the stream. To get my desired result I need to split the video into segments, extract the last second of a segment to a separate file (so I re-encode just that part), run the above command on it, and then merge all the segments back with the concat demuxer.
Is there any better and simpler way to achieve the above?
To 'extend' the last frame, extend the audio stream by padding it; players will keep showing the final video frame while the audio continues.
ffmpeg -i in.mp4 -c:v copy -af apad=pad_dur=5 out.mp4
If there's no existing audio stream, add a silent one, bounded to the desired total length (original duration plus the freeze), e.g. for a 60-second input extended by 5 seconds:
ffmpeg -i in.mp4 -f lavfi -t 65 -i anullsrc -c:v copy out.mp4
For pausing a frame in the middle with minimal re-encoding, segmenting + concat is indeed the way to go.
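A hedged sketch of that workflow, assuming a freeze point at 10 seconds (file names and times are hypothetical; with -c copy the cuts snap to the nearest keyframes, and the re-encoded freeze segment must match the original's codec parameters for the concat demuxer to accept it):
ffmpeg -t 10 -i in.mp4 -c copy part1.mp4
ffmpeg -ss 10 -t 0.1 -i in.mp4 -an -vf tpad=stop_mode=clone:stop_duration=5 freeze.mp4
ffmpeg -ss 10 -i in.mp4 -c copy part2.mp4
Then list part1.mp4, freeze.mp4 and part2.mp4 in a text file and join them with the concat demuxer:
ffmpeg -f concat -safe 0 -i list.txt -c copy out.mp4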

Cannot hear audio when concatenating some mp4 files using FFMPEG [duplicate]

This question already has answers here:
How to add a new audio (not mixing) into a video using ffmpeg?
(10 answers)
Closed 4 years ago.
I need to concatenate some MP4 files. Only one of them has audio; the other ones don't.
MyList.txt contains:
file1.mp4 without audio and 5s length
file2.mp4 without audio and 5s length
file3.mp4 without audio and 5s length
file4.mp4 with audio and Ns length
I need an output that contains the 4 mp4 files, and when file4.mp4 starts I want to hear its audio.
If I set file4.mp4 as the first video to concat, the output video has audio, but if I set file4.mp4 in any other position, the output video has no audio.
What am I doing wrong? What do I have to modify in my command?
ffmpeg -f concat -safe 0 -i myList.txt -c:v copy -c:a copy output.mp4
Have you tried generating a silent track for your mp4 files without any sound and then concatenating them?
ffmpeg -i "clip.mp4" -f lavfi -i aevalsrc=0 -shortest -y "new_clip.mp4"
This does the following:
Take clip.mp4 (which is the video clip without audio) (-i "clip.mp4")
Generate the minimum silence required (-f lavfi -i aevalsrc=0 -shortest)
Output the result (-y "new_clip.mp4")
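A hedged variant that stream-copies the video and encodes the silence as AAC so it matches a typical MP4 audio track (check file4.mp4's actual audio codec with ffprobe first; the concat demuxer expects all files to share the same codecs):
ffmpeg -i "clip.mp4" -f lavfi -i aevalsrc=0 -c:v copy -c:a aac -shortest -y "new_clip.mp4"
After creating silent versions of file1.mp4 through file3.mp4, list them in myList.txt together with file4.mp4 and rerun the concat command from the question.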
Same problem but asked on Stack Exchange:
Link
Broader explanation can be found here:
second link

Concat multiple video and audio files with ffmpeg

I have an array of audio and video clips, where each audio clip has a 1:1 correlation with its video clip. The encoding of each video clip and each audio clip is the same. How can I concat all of the audio clips and all the video clips, then merge them together to output a video? As of now I've only figured out how to merge 1 audio clip with 1 video clip:
$ ffmpeg -i video_1.webm -i audio_1.wav -acodec copy -vcodec copy output.mkv
Update
I just came across mkvmerge - would this possibly be a better option?
If all the files are encoded with the same codecs then it's easy to do. First merge the audio and video files as you have already done, so each pair of files is contained in one mkv. Then you can concatenate them with the concat demuxer like this:
ffmpeg -f concat -i <(printf "file '%s'\n" ./file1.mkv ./file2.mkv ./file3.mkv) -c copy merged.mkv
or:
ffmpeg -f concat -i <(printf "file '%s'\n" ./*.mkv) -c copy merged.mkv
You could also list one file per line in a text file called mergelist.txt (or whatever you want to call it), i.e.:
file './file1.mkv'
file './file2.mkv'
file './file3.mkv'
Then use that as the input, a la:
ffmpeg -f concat -i mergelist.txt -c copy merged.mkv
This is by far the easiest and fastest way to do what you want since it won't re-encode the files, just line them up one after another.
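Putting both steps together, a hedged sketch assuming bash and files named video_N.webm / audio_N.wav as in the question:
for i in 1 2 3; do ffmpeg -i "video_$i.webm" -i "audio_$i.wav" -c copy "pair_$i.mkv"; done
ffmpeg -f concat -i <(printf "file '%s'\n" ./pair_*.mkv) -c copy merged.mkv
Matroska can hold the WebM video and the PCM audio from the .wav as-is, so both steps are pure stream copies.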
You can find your answer here in this old question:
Concatenate two mp4 files using ffmpeg
That answer is not restricted to MP4, but it will depend on the file format you want to concatenate!
Once you have your new VIDEO file and AUDIO file, to merge them together:
ffmpeg -i AUDIO -i VIDEO -acodec copy -vcodec copy OUTPUT

ffmpeg command to combine audio and images

I'm trying to achieve something that I initially thought would be a simple task.
Is there an ffmpeg command that can do the following:
convert an audio.wav file to a video,
add some 100 pics (img%d.png) to the video so they "automatically" stretch to fill the entire length of the video.
I don't want to set the frame rate manually because it either makes the audio run ahead or lag behind.
I also don't want what happened when I used "loop_input": a short video of images got created, which played fast and then repeated itself for the entire duration of the audio.
Please let me know the command.
I've currently tried the following, but these are not giving me the desired results:
This one produces a video, but the video goes fast and the audio doesn't run its full length:
ffmpeg -i img%d.png -i audio.wav -acodec copy output.mpg
This one makes a short video which repeats for the full audio duration:
ffmpeg -loop_input -shortest -i img%d.png -i audio.wav -acodec copy output.mpg
This one nearly works, but "-r 4" makes the video go slow and the audio runs ahead. If I use "-r 5" then the audio goes slow and the video runs ahead:
ffmpeg -r 4 -i img%d.png -i audio.wav -acodec copy -r 30 output.mpg
Measure the duration of the audio track and then use -t $audio_duration.
This argument, along with -loop 1, will stop the mp4 at a time that matches the audio.
You might also try a 2-pass technique, including -vcodec libx264, as it works well for producing mp4.
Also think about the following, adjusted for your inputs and target rates:
-b:v 200k -bt 50k
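A hedged sketch of that idea with hypothetical numbers: measure the audio, then pick an input framerate that spreads the 100 frames across it. For a 200-second audio track, 100 frames / 200 s gives one frame every 2 seconds:
ffprobe -v quiet -show_entries format=duration -of csv=p=0 audio.wav
ffmpeg -framerate 1/2 -i img%d.png -i audio.wav -c:v libx264 -pix_fmt yuv420p -c:a aac -shortest output.mp4
The -shortest flag stops the output when the shorter stream ends, keeping the audio and the stretched image sequence aligned; if your player struggles with very low frame rates, add -r 25 before the output name to duplicate frames without changing the duration.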
