Trying to concatenate very large video files with ffmpeg

I have anywhere from 10-24 mp4 files, each about 1 hour in length. When I try to concatenate them into one, I am left with only a 1-hour output.mp4 whose content is just that of the first 1-hour video I fed in, with completely choppy audio.
Has anyone tried to run ffmpeg to form a very large video file?
This is one of the ffmpeg commands I ran:
ffmpeg -f concat -safe 0 -i videos.txt -an -filter:v "setpts=0.01*PTS" total_output.mkv
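For reference, the concat demuxer expects each entry in videos.txt on its own `file '<path>'` line, and all inputs must share the same codecs and stream parameters for copy-style concatenation. A minimal sketch of generating such a list (the part*.mp4 filenames are placeholders, not the asker's files):

```shell
# Build a concat list; each line must read: file '<path>'
for f in part1.mp4 part2.mp4 part3.mp4; do
  printf "file '%s'\n" "$f"
done > videos.txt
cat videos.txt
```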


Using ffmpeg, jpg to mp4 to mpegts, play with HLS M3U8, only first TS file plays - why?

Before posting I searched and found similar questions on Stack Overflow (I list some below), but none helped me towards a solution, hence this post. The duration each image is shown within the movie file differs from many posts I have seen so far.
A camera captures 1 image every 30 seconds. I need to stream them, preferably via HLS, so I wrap 2 images in an MP4 and then convert the MP4 to MPEG-TS. Each MP4 and TS file plays fine individually (each contains two images, each image transitions after 30 seconds, each movie file is 1 minute long).
When I reference the two TS files in an M3U8 playlist, only the first TS file gets played. Can anyone advise why it stops, and how I can get it to play all the TS files I expect to create, not just the first one? Besides my ffmpeg commands, I also include my VLC log file (though I expect to stream to Firefox/Chrome clients). I am using ffmpeg 4.2.2-static installed on an AWS EC2 instance with Amazon Linux 2.
I have four jpgs named image11.jpg, image12.jpg, image21.jpg, image22.jpg. The images look near-identical, as only the timestamp in the top left changes.
The following command creates 1.mp4 using image11.jpg and image12.jpg, each image displayed for 30 seconds, for a total mp4 duration of 1 minute. It plays as expected.
ffmpeg -y -framerate 1/30 -f image2 -i image1%1d.jpg -c:v libx264 -vf "fps=1,format=yuvj420p" 1.mp4
I then convert 1.mp4 to an mpegts file, creating 1.ts. It plays as expected.
ffmpeg -y -i 1.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -f mpegts 1.ts
I repeat the above steps except specific to image21.jpg and image22.jpg, creating 2.mp4 and 2.ts
ffmpeg -y -framerate 1/30 -f image2 -i image2%1d.jpg -c:v libx264 -vf "fps=1,format=yuvj420p" 2.mp4
ffmpeg -y -i 2.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -f mpegts 2.ts
Thus now I have 1.mp4, 1.ts, 2.mp4, 2.ts and all four play individually just fine.
Using ffprobe I can confirm their duration is 60 seconds, for example:
ffprobe -i 1.ts -v quiet -show_entries format=duration -hide_banner -print_format json
My m3u8 playlist follows:
#EXTM3U
#EXT-X-VERSION:4
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-MEDIA-SEQUENCE:1
#EXT-X-TARGETDURATION:60.000
#EXTINF:60.0000,
1.ts
#EXTINF:60.000,
2.ts
#EXT-X-ENDLIST
Can anyone advise where I am going wrong?
VLC Error Log (though I expect to play via web browser)
I have researched the process using these (and other pages) as a guide:
How to create a video from images with ffmpeg
convert from jpg to mp4 by ffmpeg
ffmpeg examples page
FFMPEG An Intermediate Guide/image sequence
How to use FFmpeg to convert images to video
Take a look at the start_pts/start_time in the ffprobe -show_streams output; my guess is that they all start at zero/near-zero, which will cause playback to fail after your first segment.
You can still produce them independently but you will want to use something like -output_ts_offset to correctly set the timestamps for subsequent segments.
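A sketch of that approach, assuming the second segment should start at 60 seconds (filenames follow the question; verify the offset against the real duration of your first segment, and note this requires ffmpeg and the source MP4 on hand):

```shell
# Remux the second MP4 to MPEG-TS, shifting all its timestamps forward by
# 60 seconds so it follows 1.ts in the playlist instead of resetting to zero.
ffmpeg -y -i 2.mp4 -c:v copy -bsf:v h264_mp4toannexb -output_ts_offset 60 -f mpegts 2.ts
```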
The following solution works well for me. I have tested it uninterrupted for more than two hours and believe it ticks all my boxes. (Edited because I forgot the all-important -re flag.)
ffmpeg will loop continuously, reading test.jpg and streaming it to my RTMP server. When my camera posts an image every 30 seconds, I copy the new image over the existing test.jpg, which in effect changes what is streamed out.
Note: the command below is all one line; I have added line breaks to aid reading. The order of the parameters is important: -loop and -fflags +genpts, for example, must appear before the -i parameter.
ffmpeg
-re
-loop 1
-fflags +genpts
-framerate 1/30
-i test.jpg
-c:v libx264
-vf fps=25
-pix_fmt yuvj420p
-crf 30
-f fifo -attempt_recovery 1 -recovery_wait_time 1
-f flv rtmp://localhost:5555/video/test
Some arguments explained:
-re means read the input at its native frame rate, i.e. in real time
-loop 1 (1 turns the loop on, 0 off)
-fflags +genpts is something I only half understand. PTS I believe is the start/end time of the segment, and without this flag the PTS is reset to zero with every new image. Using this argument means I avoid EXT-X-DISCONTINUITY when a new image is served.
-framerate 1/30 means one frame every 30 seconds
-i test.jpg is my image 'placeholder'. As new images are received via a separate script, it overwrites this image. When combined with the loop, it means the ffmpeg output will reference the new image.
-c:v libx264 is for H264 video output formatting
-vf fps=25 Removing this, or using a different value, resulted in my output stream not being 30 seconds.
-pix_fmt yuvj420p (sometimes I have seen yuv420p referenced, but this did not work in my environment). I believe there are different jpg colour palettes and this switch ensures I can process a wider choice.
-crf 30 sets the quality/compression trade-off. Note that lower CRF values mean higher quality, so 30 is fairly compressed; lower it if quality is important (as it was for my client).
-f fifo -attempt_recovery 1 -recovery_wait_time 1 -f flv rtmp://localhost:5555/video/test is part of the magic that goes with the loop. I believe it keeps the connection open with my stream server and reduces the risk of DISCONTINUITY in the playlist.
I hope this helps someone going forward.
The following links helped nudge me forward, and I share them as they might help others improve upon my solution:
Creating a video from a single image for a specific duration in ffmpeg
How can I loop one frame with ffmpeg? All the other frames should point to the first with no changes, maybe like a recusion
Display images on video at specific framerate with loop using FFmpeg
Loop image ffmpeg HLS
https://trac.ffmpeg.org/wiki/Slideshow
https://superuser.com/questions/1699893/generate-ts-stream-from-image-file
https://ffmpeg.org/ffmpeg-formats.html#Examples-3
https://trac.ffmpeg.org/wiki/StreamingGuide

Can not hear audio when concatenate some mp4 files using FFMPEG [duplicate]

This question already has answers here:
How to add a new audio (not mixing) into a video using ffmpeg?
(10 answers)
Closed 4 years ago.
I need to concatenate some MP4 files. Only one of them has audio. The others haven't.
MyList.txt contains:
file1.mp4 without audio, 5s length
file2.mp4 without audio, 5s length
file3.mp4 without audio, 5s length
file4.mp4 with audio, Ns length
I need an output that contains the 4 mp4 files, and when file4.mp4 starts I want to hear its audio.
If I set file4.mp4 as the first video to concat, the output video has audio, but if I put file4.mp4 in any other position, the output video has no audio.
What am I doing wrong? What do I have to modify in my command?
ffmpeg -f concat -safe 0 -i myList.txt -c:v copy -c:a copy output.mp4
Have you tried generating a silent track for your mp4 files without any sound and then concatenating them?
ffmpeg -i "clip.mp4" -f lavfi -i aevalsrc=0 -shortest -y "new_clip.mp4"
This does the following:
Take clip.mp4 (which is the video clip without audio) (-i "clip.mp4")
Generate the minimum silence required (-f lavfi -i aevalsrc=0 -shortest)
Output the result (-y "new_clip.mp4")
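Putting that together, a hedged sketch that silences each video-only clip and then concatenates the lot (filenames follow the question; the -c:v copy / -c:a aac choices are my assumption, since concat with -c copy needs every input to carry matching streams):

```shell
# Add a silent AAC track to each video-only clip, leaving video untouched.
for f in file1.mp4 file2.mp4 file3.mp4; do
  ffmpeg -y -i "$f" -f lavfi -i aevalsrc=0 -c:v copy -c:a aac -shortest "silent_$f"
done

# Rebuild the concat list so every entry now has an audio stream.
{
  printf "file 'silent_file1.mp4'\n"
  printf "file 'silent_file2.mp4'\n"
  printf "file 'silent_file3.mp4'\n"
  printf "file 'file4.mp4'\n"
} > myList.txt

ffmpeg -f concat -safe 0 -i myList.txt -c copy output.mp4
```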
Same problem but asked on stack exchange:
Link
Broader explanation can be found here:
second link

FFMPEG - concat demuxer with duration filter issue

I am trying to generate a video from images using the ffmpeg concat demuxer. I am creating a text file with the image file paths. Since images can be shown for different lengths of time, I am using the duration directive to specify the duration of each image. A sample text file looks like:
file 1.jpg
duration 3
file 2.jpg
duration 3
file 3.jpg
duration 5
1.jpg and 2.jpg are both displayed for the specified 3 seconds each, but 3.jpg is shown for just 2 seconds.
FFMPEG command:
ffmpeg -f concat -i D:/textfile.txt -y -r 10 -crf 22 -threads 2 -preset veryfast D:/video.mp4
Use
ffmpeg -f concat -i textfile -y -vf fps=10 -crf 22 -threads 2 -preset veryfast video.mp4
where textfile is
file 1.jpg
duration 3
file 2.jpg
duration 3
file 3.jpg
duration 5
file 3.jpg
My usage
I'm facing a similar issue with different ffmpeg command options to concat a list of JPG images to an MP4 video.
FFmpeg version:
ffmpeg version 5.0.1-full_build-www.gyan.dev Copyright (c) 2000-2022 the FFmpeg developers
built with gcc 11.2.0 (Rev7, Built by MSYS2 project)
Command:
ffmpeg -f concat -i "input.txt" -c:v libx264 -r 30 -pix_fmt yuv420p "output.mp4"
input.txt:
file '1.jpg'
duration 5
file '2.jpg'
duration 5
Reference: https://shotstack.io/learn/use-ffmpeg-to-convert-images-to-video/
The problem I'm facing
The resulting video only shows the 1st image for 5 seconds, but at the end of the video I can see about one frame of the 2nd image.
From what I have tested so far, the issue only happens when I have 1 or 2 images. For 3 or more images, I get the expected result. For 3 images and duration 5 applied to all 3 files, the output duration is 14.967000 seconds (close to the expected duration of 15 seconds).
Findings
I can see from the below FFmpeg defect ticket that this is a known bug in the concat demuxer.
https://trac.ffmpeg.org/ticket/6128
Resolution
For anyone hitting this kind of weird issue, whether it's a misuse or a bug, I think all we can do before it gets fixed in a future release is apply some workarounds.
As Gyan commented below the question, you need to add one more file to the end of the input text file.
I tried that, but I was not able to get the expected duration, just as the OP replied to Gyan.
So for me, I just make the input file like this, converting the images to videos one by one:
file '1.jpg'
duration 5
file '1.jpg'
I get an output duration of 5.034000 seconds.
Then I'll just repeat the same process for the 2nd image, and concat the 2 videos with another ffmpeg command.
ffmpeg -safe 0 -f concat -i "concat_input.txt" -c copy "output.mp4"
concat_input.txt:
file '1.mp4'
file '2.mp4'
The output duration is 10.068000, very close to what I'm expecting.
Other info
The command to check video duration is:
ffprobe output.mp4 -show_entries format=duration -v 0

How to Loop Input video x number of time using FFMPEG?

I want to loop the same video 4 times and output it as a single video using ffmpeg.
So I ran this command:
ffmpeg -loop 4 -i input.mp4 -c copy output.mp4
but when I run it, it gives this error:
Option loop not found.
How can I do this without the error? Please help me.
In recent versions, it's
ffmpeg -stream_loop 4 -i input.mp4 -c copy output.mp4
Due to a bug, the above does not work with MP4s, but if you first wrap the input in an MKV, it works for me.
ffmpeg -i input.mp4 -c copy output.mkv
then,
ffmpeg -stream_loop 4 -i output.mkv -c copy output.mp4
I've found an equivalent workaround using input concatenation, for outdated/bugged versions of -stream_loop:
ffmpeg -f concat -safe 0 -i "video-source.txt" -f concat -safe 0 -i "audio-source.txt" -c copy -map 0:0 -map 1:0 -fflags +genpts -t 10:00:00.0 /path/to/output.ext
This will loop video and audio independently of each other and force-stop the output at the 10-hour mark.
Both text files consist of
file '/path/to/file.ext'
but you must make sure to repeat this line enough times to cover the full output duration.
For example, if your total listed video time is less than the total audio time, the video will stop earlier than intended while the audio keeps playing, until either the -t 10-hour limit is reached or the audio ends prematurely.
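Those repeated lines need not be written by hand; a small sketch of generating one of the source files (the path is a placeholder, and 40 repetitions is an arbitrary count to illustrate):

```shell
# Emit the same concat entry 40 times; pick a count large enough that the
# total listed duration comfortably exceeds the -t cutoff.
for i in $(seq 1 40); do
  printf "file '/path/to/video.mp4'\n"
done > video-source.txt
```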

Add multiple audio files to video at specific points using FFMPEG

I am trying to create a video out of a sequence of images and various audio files using FFmpeg. While it is no problem to create a video containing the sequence of images with the following command:
ffmpeg -f image2 -i image%d.jpg video.mpg
I haven't found a way yet to add audio files at specific points to the generated video.
Is it possible to do something like:
ffmpeg -f image2 -i image%d.jpg -i audio1.mp3 AT 10s -i audio2.mp3 AT 15s video.mpg
Any help is much appreciated!
EDIT:
The solution in my case was to use sox, as suggested by blahdiblah in the answer below. You first have to create an empty audio file as a starting point, like this:
sox -n -r 44100 -c 2 silence.wav trim 0.0 20.0
This generates a 20-second silent WAV file. After that you can mix the empty file with other audio files.
sox -m silence.wav "|sox sound1.mp3 -p pad 0" "|sox sound2.mp3 -p pad 2" out.wav
The final audio file has a duration of 20 seconds and plays sound1.mp3 right at the beginning and sound2.mp3 after 2 seconds.
To combine the sequence of images with the audio file we can use FFmpeg.
ffmpeg -i video_%05d.png -i out.wav -r 25 out.mp4
See this question on adding a single audio input with some offset. The -itsoffset bug mentioned there is still open, but see users' comments for some cases in which it does work.
If it works in your case, that would be ideal:
ffmpeg -i in%d.jpg -itsoffset 10 -i audio1.mp3 -itsoffset 15 -i audio2.mp3 out.mpg
If not, you should be able to combine all the audio files with sox, overlaying or inserting silence to produce the correct offsets and then use that as input to FFmpeg. Not as convenient, but guaranteed to work.
One approach I can think of is to create your audio file for the whole duration of the video first, and then mux the audio with the video file.
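One way to sketch that in a single ffmpeg invocation is with the adelay and amix filters. Filenames and the 10 s/15 s offsets are taken from the question; treat this as an assumption about the desired result, not as the answer's original method:

```shell
# Delay each audio input to its start point (adelay takes milliseconds,
# one value per channel), mix the delayed streams, and mux the result
# with the untouched video stream.
ffmpeg -i video.mpg -i audio1.mp3 -i audio2.mp3 \
  -filter_complex "[1:a]adelay=10000|10000[a1];[2:a]adelay=15000|15000[a2];[a1][a2]amix=inputs=2:duration=longest[aout]" \
  -map 0:v -map "[aout]" -c:v copy out.mpg
```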