I'm quite new to FFmpeg and I've been learning how to make a 5-hour video by looping a clip, adding an intro video at the start and an outro/endscreen video at the end. I'd like to know how I could do it more efficiently, since it currently takes a long time and too much work. My process and commands are as follows:
Video specs: 3840x2160, 60 fps, 18M bitrate, HEVC (H.265).
First I create the intro and outro/endscreen videos and save them in .mp4 format (both are 10 seconds long) using Adobe Premiere.
Then I create an audio track in Adobe Audition for however long I want it to play, in this case 5 hours.
Next I create the main video (which is 10 seconds long) and loop it for 5 hours, or whatever duration I want. This takes a lot of time, produces a huge file, and means adding the video names to loop.txt (video1.mp4, video2.mp4, and so on until I reach 5 hours). I use the command:
ffmpeg -f concat -i loop.txt -c copy main5hours.mp4
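As an aside, loop.txt doesn't have to be written by hand. A minimal bash sketch that generates it (the names main.mp4/loop.txt and the 1800-repeat count are assumptions: 1800 × 10 s = 5 hours, and the concat demuxer accepts the same file listed repeatedly):

```bash
#!/bin/bash
# Generate loop.txt with 1800 references to the same 10-second clip
# (1800 * 10 s = 5 hours) instead of listing video1.mp4, video2.mp4, ... by hand.
CLIP="main.mp4"   # assumed name of the 10-second source clip
REPEATS=1800      # 5 hours / 10 seconds per loop
: > loop.txt      # create/truncate the list file
for i in $(seq "$REPEATS"); do
    echo "file '$CLIP'" >> loop.txt
done
```

The concat command above then works unchanged, and no duplicate copies of the clip are needed on disk.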
Then I concat the intro, main, and outro/endscreen videos together using:
ffmpeg -f concat -i files.txt -c copy videowithoutaudio.mp4
Then I merge the video and audio using:
ffmpeg -i videowithoutaudio.mp4 -i audio.flac -c:v copy -c:a copy finalvideo.mp4
So this process takes a lot of time and disk space, and I'd like some advice on how to do it more efficiently: saving time, saving file size, and doing less work.
I've read that I could also overlay the intro and endscreen videos, but I'm not sure how to do that. Would it make any difference?
P.S.: I'll be using the same intro and endscreen videos for all the other videos I make. Thanks in advance for any advice and help.
I am using FFmpeg to extract a screenshot at a given timestamp, but I find this timestamp manually by watching the video in VLC and looking for the exact moment the thumbnail was generated. This process is very time-consuming, and I need to do it for 220 videos.
All this is in order to get a high-resolution image of the thumbnail. I should also mention that the thumbnail file has the timestamp neither in its metadata nor in its title.
Is there any way for FFmpeg to give me the exact timestamp at which the thumbnail was taken?
UPDATED
After a couple of hours testing FFmpeg commands I found a solution. It is not completely automatic, but it works. The command is:
ffmpeg -ss 00:02:30 -i video.mp4 -t 00:00:40 -loop 1 -i thumbnail.jpg \
-filter_complex "scale=480:270,hue=s=0,blend=difference:shortest=1, \
blackframe=95:30,fps=fps=23" -f null -
Options to modify:
"video.mp4": replace with your video file (obviously).
"thumbnail.jpg": replace with your thumbnail file.
"-ss" and "-t" define the time range where the thumbnail is likely to be.
"-ss" is the start time: 00:02:30 (2 min 30 s).
"-t" is the duration from that start: 00:00:40 (so the search runs from 2 min 30 s to 3 min 10 s).
If you have no idea where the thumbnail might be, you can drop these two options; it will just take longer to find it.
"480:270": replace with the size of the thumbnail.
"fps=23": change 23 to the exact fps of the "video.mp4" file.
And as output we get:
[Parsed_blackframe_1] frame:3849 pblack:100 pts:160535 t:160.535000
In this example, we can see that the command has given us the exact timestamp where the thumbnail was generated, "160.535000", which is in seconds (with microsecond precision).
Now we could use the found timestamp to extract the thumbnail in high resolution, but it is more exact and precise to use the frame number, which in this case is "frame:3849".
Using this command, we obtain the exact image:
ffmpeg -i video.mp4 -vf "select=gte(n\, 3849)" -vframes 1 high_resolution.png
Well, I hope this is helpful for anyone looking for the original image of a thumbnail, or in general anyone who needs to know exactly the moment it was taken.
If someone in the future would like to write a script that fully automates this process, I would be grateful :)
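A rough bash sketch of such a script (the file names and the 00:02:30/00:00:40 search window are placeholders to adapt, 23 should be the video's real fps, and the parsing assumes the [Parsed_blackframe_...] log format shown above):

```bash
#!/bin/bash
# Semi-automatic thumbnail locator (sketch).
parse_blackframe() {
    # Pull the first frame number out of blackframe log output, e.g.
    # "[Parsed_blackframe_1] frame:3849 pblack:100 pts:160535 t:160.535000" -> 3849
    grep -o 'frame:[0-9]*' | head -n1 | cut -d: -f2
}
find_thumb_frame() {
    # $1 = video file, $2 = thumbnail image, $3 = search start, $4 = search duration.
    # Prints the frame number of the best match.
    ffmpeg -ss "$3" -i "$1" -t "$4" -loop 1 -i "$2" \
        -filter_complex "scale=480:270,hue=s=0,blend=difference:shortest=1,blackframe=95:30,fps=fps=23" \
        -f null - 2>&1 | parse_blackframe
}
# Example use (uncomment; all names and times are placeholders):
# frame=$(find_thumb_frame video.mp4 thumbnail.jpg 00:02:30 00:00:40)
# ffmpeg -i video.mp4 -vf "select=gte(n\,$frame)" -vframes 1 high_resolution.png
```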
Users of my app upload videos to my server and I process them to create different qualities, thumbnails, GIFs, etc., which are then used by the mobile and web apps. It takes almost 15-20 minutes to process each video. I am using FFmpeg. How can I reduce my processing time?
I can't comment, so I'll ask here.
15-20 minutes to make a thumbnail/GIF from a video? If so, that's an awful lot.
If you want HQ lossless, consider using the x264 encoder in lossless mode with the ultrafast preset to make the videos (older FFmpeg builds called this the lossless_ultrafast preset):
ffmpeg -f x11grab -r 25 -s 1080x720 -i :0.0 -c:v libx264 -preset ultrafast -qp 0 yourfile.mkv
If possible, use a GPU to convert.
I might be wrong, but FFmpeg may be using only one thread by default. You could run multiple instances in parallel to work around that.
I'm encoding videos by scenes. At the moment I have two ways to do this. The first one uses a Python application which gives me a list of frames that mark scene changes, like this:
285
378
553
1145
...
The first scene runs from frame 1 to 285, the second from 285 to 378, and so on. So I made a bash script which encodes all these scenes. Basically, it takes the current and previous frame numbers, converts them to times, and finally runs the ffmpeg command:
begin=$(awk -v f="$previous" 'BEGIN{ print f/24 }')
end=$(awk -v f="$current" 'BEGIN{ print f/24 }')
time=$(awk -v b="$begin" -v e="$end" 'BEGIN{ print e-b }')
ffmpeg -i "$video" -r 24 -c:v libx265 -f mp4 -c:a aac -strict experimental -b:v 1.5M -ss "$begin" -t "$time" "output$count.mp4" -nostdin
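The frame-to-time conversion above can be factored into small helpers; a sketch with the 24 fps hardcoded, as in the question:

```bash
#!/bin/bash
# Convert a frame number to seconds at a fixed 24 fps.
frame_to_time() {
    awk -v f="$1" 'BEGIN{ printf "%.6f", f/24 }'
}
# Duration in seconds between two frame numbers at 24 fps.
duration_between() {
    awk -v a="$1" -v b="$2" 'BEGIN{ printf "%.6f", (b-a)/24 }'
}
# Example for the second scene (frames 285 to 378):
#   begin=$(frame_to_time 285)        # 11.875000
#   time=$(duration_between 285 378)  # 3.875000
#   ffmpeg -i "$video" -ss "$begin" -t "$time" ... "output2.mp4"
```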
This works perfectly. The second method uses ffmpeg itself: I run a command and it gives me a list of times, like this:
15.75
23.0417
56.0833
71.2917
...
Again, I made a bash script that encodes all these segments. In this case I don't have to convert anything, because what I get are already times:
time=$(awk -v p="$previous" -v c="$current" 'BEGIN{ print c-p }')
ffmpeg -i "$video" -r 24 -c:v libx265 -f mp4 -c:a aac -strict experimental -b:v 1.5M -ss "$previous" -t "$time" "output$count.mp4" -nostdin
With all that explained, here comes the problem: once all the scenes are encoded I need to concatenate them, and for that I create a list with the video names and then run the ffmpeg command.
list.txt
file 'output1.mp4'
file 'output2.mp4'
file 'output3.mp4'
file 'output4.mp4'
command:
ffmpeg -f concat -i list.txt -c copy big_buck_bunny.mp4
The problem is that the concatenated video is longer than the original by 2.11 seconds: the original lasts 596.45 seconds and the encoded one lasts 598.56. I added up the duration of every segment and got 598.56, so I think the problem is in the encoding process. Both videos have the same number of frames. My goal is to get metrics about the encoding process, but when I run VQMT to get the PSNR and SSIM I get weird results, which I think is due to this problem.
By the way, I'm using the big_buck_bunny video.
The difference is probably due to the copy codec. In the concat step you tell ffmpeg to copy the segments, but it can't do that exactly at the times you give it.
It first has to find the previous I-frame (a frame that can be decoded without reference to any earlier frame) and start from there.
To get what you need, you have to either re-encode the video (as you did in the two examples above) or adjust the times so the cuts land on I-frames.
To be sure I'm understanding your issue correctly:
You have a source video (encoded at a variable frame rate, close to 18 fps).
You want to split the source video via ffmpeg, forcing the frame rate to 24 fps.
Then you want to concatenate the segments.
I think the issue is mainly a discrepancy in the timing (if I divide the frame indexes by the times you've given, I get between 16 fps and 18 fps). When you convert them in step 2, each output segment is timed at 24 fps. ffmpeg does not resample along the time axis, so if you force a frame rate, the video will speed up or slow down.
There is also the issue of stream consistency:
Typically, a video stream must start with an I-frame, so when splitting with the copy codec, FFmpeg has to locate the previous I-frame, and this changes the duration of the segment.
When concatenating, you can also run into consistency issues: if one segment ends with an I-frame and the next one starts with an I-frame, it's possible FFmpeg drops either of them (although I don't remember the current behavior).
So, to solve your issue, if I were you I would avoid step 2 (it's bad for quality anyway). That is, I would use ffmpeg to split out the segments of interest based on frame number (the only value in your scheme that isn't approximate) into png or ppm frames (or into a pipe if you don't care about keeping them), and then concatenate all the frames in a final encoding step, with the expected rate set to totalFrameCount / totalVideoTime.
You'll get a smaller and higher-quality final video.
If you can't do that for whatever reason, then at least for the concat input you should use the ffconcat format:
ffconcat version 1.0
file segment1
duration 12.2
file segment2
duration 10.3
This will give you the expected duration, cutting each segment short if it's longer.
For selecting by frame number (instead of by time, which is hard to get right with variable-frame-rate video), you should use the select filter like this:
-vf "select=between(n\,start_frame_num\,end_frame_num),setpts=PTS-STARTPTS"
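For example, cutting the second scene from the question (frames 285 to 377 inclusive) could look like this; the input/output names are hypothetical, and note that select needs a re-encode (no -c copy):

```bash
#!/bin/bash
# Build a frame-accurate -vf string; \, escapes the commas for the
# filtergraph parser, and setpts rebases timestamps to start at 0.
scene_filter() {
    # $1 = first frame, $2 = last frame (both inclusive)
    printf 'select=between(n\\,%d\\,%d),setpts=PTS-STARTPTS' "$1" "$2"
}
# Hypothetical usage:
#   ffmpeg -i input.mp4 -vf "$(scene_filter 285 377)" -c:v libx265 -b:v 1.5M scene2.mp4
```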
I suggest checking the input and output frame rates and making sure they match. That could be a source of the discrepancy.
I have two time-lapse videos with a rate of 1 fps; the camera took one image every minute. Unfortunately, the camera was not set to burn/print the time and date onto every image, so I am trying to burn the time and date into the video afterwards.
Using ffmpeg, I decoded the two .avi files into ~7000 single images each and wrote an R script that renamed the files to their "creation" date (the time and date the pictures were taken). Then I used exiftool to write that information into each file's EXIF/metadata.
The final images in the folder are looking like this:
2018-03-12 17_36_40.png
2018-03-12 17_35_40.png
2018-03-12 17_34_40.png
...
Is it possible to create a video from these images again with ffmpeg or something similar, with a "timestamp" in the video, so that you can see the time and date while watching?
I think this can be done in two steps.
First, you create an mp4 file with the timestamp for every picture. Here is a Windows batch file which creates such video files:
#echo off
set "INPUT=C:\t\video"
for %%a in ("%INPUT%\*.png") do (
ffmpeg -i "%%~a" -vf "drawtext=text=%%~na:x=50:y=100:fontfile=/Windows/Fonts/arial.ttf:fontsize=25:fontcolor=white" -c:v libx264 -pix_fmt yuv420p "output/%%~na.mp4"
)
This will create an mp4 in the output/ directory for every png picture.
Explanation:
The for loop iterates over all the *.png files and creates a *.mp4 file for each one.
The text is added via a drawtext overlay; it is the filename without the .png suffix, obtained with the batch expansion %%~na.
The font used is arial.ttf (feel free to use any font you like).
x and y are the coordinates where you want to place the text.
libx264 selects x264 for encoding.
-pix_fmt yuv420p is there so that less capable players can play the result.
The second step is to concatenate the h.264 files using the concat demuxer.
You need to create a file list, like file_list.txt:
file '2018-03-12 17_34_40.mp4'
duration 10
file '2018-03-12 17_35_40.mp4'
duration 10
file '2018-03-12 17_36_40.mp4'
duration 10
...
Examples can be found here.
Then you simply concatenate all the *.mp4 files; run this in the output subdirectory:
ffmpeg -safe 0 -f concat -i file_list.txt -c copy output.mp4
This will create one concatenated output.mp4 file.
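With ~7000 images, file_list.txt is also best generated by a script. A bash sketch (it assumes the fixed 10-second duration and the output/ directory from the batch file above; on Windows this would be translated to batch):

```bash
#!/bin/bash
# Print a concat-demuxer file list for every mp4 segment in a directory,
# assigning each one a fixed 10-second duration.
make_file_list() {
    # $1 = directory containing the per-image mp4 files
    for f in "$1"/*.mp4; do
        echo "file '$(basename "$f")'"
        echo "duration 10"
    done
}
# Hypothetical usage:
#   make_file_list output > output/file_list.txt
```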
If I understand correctly, you have some number of YUV frames with their stream information saved separately.
For example:
Let's say you originally have a 10-second video at 24 frames per second (constant, not variable).
So you have 240 YUV frames. In this case, or a similar one, you can generate a video file via ffmpeg with a container format like mp4, supplying the resolution and frame-rate information. You then won't need any metadata to turn it back into a playable video; it will play normally in any decent player.
If you only have KEY FRAMES with varying frame timings between them, then yes, you will need the metadata information, and I'm not sure how you can do that.
Since you have the source, you can extract every single frame (24 frames per second in this example) and you are good to go. Similar answers have already been given for this; just look around.
Hope that helps.
I use ffmpeg to save an RTSP stream at 15 fps to files. The command is similar to this (I've simplified it):
ffmpeg -y -i rtsp://IP/media.amp -c copy -r 15 -f segment -segment_time 60 -reset_timestamps 1 -segment_atclocktime 1 -strftime 1 outputFile%Y-%m-%d_%H-%M-%S.mp4
It basically creates one-minute-long files from the stream, but the problem is that the frame rate of the segmented files is NEVER exactly 15 fps.
The values that I get look like this:
14.99874
15.00031
This is a huge problem for me because I need to merge these files with other 15 fps videos, and the result is not good: the merged file is unstable, the image breaks up, and sometimes VLC even crashes if I click randomly on the seek bar.
If I just merge the stream files together, all is well; but when I try to mix them with anything else, there is nothing I can do to get a video file that is watchable and stable.
Is this normal? What can I do to get segments at a fixed 15 fps without re-encoding?
Thanks in advance.
As Mulvya pointed out, ffmpeg truncates the last frame.
There are two ways to solve this:
1) Save the files to a container other than mp4; TS works.
2) Removing the last frame of the video also works, but that requires a filter, which means re-encoding, which can be long and heavy on the CPU/RAM.
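A sketch of option 1: the same command as in the question, only writing MPEG-TS segments. It is wrapped in a function that just prints the command line so it can be reviewed first (the URL and output pattern are the placeholders from the question); run it with eval "$(segment_cmd)".

```bash
#!/bin/bash
# Option 1: segment to MPEG-TS instead of mp4 so the last frame of each
# segment is not truncated. Prints the command instead of running it.
segment_cmd() {
    printf '%s ' ffmpeg -y -i "rtsp://IP/media.amp" -c copy \
        -f segment -segment_time 60 -segment_format mpegts \
        -reset_timestamps 1 -segment_atclocktime 1 -strftime 1 \
        "outputFile%Y-%m-%d_%H-%M-%S.ts"
}
```

The resulting .ts segments can later be concatenated and remuxed to mp4 losslessly with the concat demuxer and -c copy.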