I use ffmpeg to save an RTSP stream at 15 fps to files. The command is similar to this (I've simplified it):
ffmpeg -y -i rtsp://IP/media.amp -c copy -r 15 -f segment -segment_time 60 -reset_timestamps 1 -segment_atclocktime 1 -strftime 1 outputFile%Y-%m-%d_%H-%M-%S.mp4
It basically creates one-minute-long files from the stream, but the problem is that the framerate of every segmented file is NEVER exactly 15 fps.
The values that I get are something like this:
14.99874
15.00031
This is a huge problem for me because I need to merge these files with other 15 fps videos, and the result is not good. The merged file is unstable, the image breaks up, and sometimes even VLC crashes if I randomly click on the time bar.
If I just merge the stream files, all is well; when I try to mix them with something else, there is nothing I can do to get a video file that is watchable and stable.
Is this normal? What can I do to get segments with a fixed 15 fps without re-encoding?
Thanks in advance.
As Mulvya pointed out, ffmpeg truncates the last frame.
There are two ways to solve this:
1) Save the files to a container other than MP4; TS works (see the sketch below)
2) Removing the last frame of the video also works, but you have to use a filter, which means re-encoding, which can be long and heavy on the CPU/RAM
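A minimal sketch of option 1, reusing the command from the question with only the container swapped (-r is dropped here, since stream copy passes frames through unchanged):
ffmpeg -y -i rtsp://IP/media.amp -c copy -f segment -segment_time 60 -reset_timestamps 1 -segment_atclocktime 1 -strftime 1 outputFile%Y-%m-%d_%H-%M-%S.ts
The .ts segments can later be remuxed or concatenated to MP4 in one piece if needed.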
I want to extract specific frames from a video with ffmpeg and save them on an external disk, with the video's timestamp as the output name, preferably with milliseconds so that I can acquire more frames per second.
As a first approach, I tried to extract the frames to the external disk with the following command, where I intend to extract 1 frame per second from a specific time interval and save them as a sequence of images:
ffmpeg -ss 00:12:25 -to 00:12:35 -i 220718-124513_CAM0bc99448_30.mp4 -r 1 E:/images/img_%04d.png
Once I tried a different time interval, it began overwriting the images I already had on the disk, because the sequence numbering restarted, and the purpose of this is that I want to retrieve as many images as possible.
Then I tried the following command
ffmpeg -ss 00:00:00 -to 00:00:04 -i 220718-124513_CAM0bc99448_30.mp4 -vframes 1 -f image2 -strftime 1 E:/images/"img_%Y-%m-%d_%H-%M-%S.png"
thinking that it would give me the video's timestamp, but it saved the images with the local time. That solves the overwriting problem I was having, but I would like the names to carry the timestamp those frames correspond to in the video, preferably with milliseconds included, so that I can acquire even more frames per second (done this way, if I want 2 frames per second, only one image gets saved, because the output names only resolve to whole seconds).
Finally, I tried this command:
ffmpeg -ss 00:00:00 -to 00:00:04 -i 220718-124513_CAM0bc99448_30.mp4 -copyts -f image2 -frame_pts true -r 2 E:/images/img_%04d.png
and it seems to solve the issue; if there is no solution to the problem I mentioned, this is the way I will implement it.
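One refinement that might recover the millisecond timestamps (a sketch I haven't verified): keep the extraction as above but add the showinfo filter, which logs each frame's pts_time to stderr with fractional seconds, and then map those log lines to the %04d sequence numbers afterwards:
ffmpeg -ss 00:00:00 -to 00:00:04 -i 220718-124513_CAM0bc99448_30.mp4 -copyts -r 2 -vf showinfo E:/images/img_%04d.png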
This is the first time I am posting on Stack Overflow, so if there is something missing from my question, please tell me and I will change it.
Thanks in advance!
I am making a datamoshing program in C++, and I need to find a way to remove one frame from a video (specifically, the p-frame right after a sequence jump) without re-encoding the video. I am currently using h.264 but would like to be able to do this with VP9 and AV1 as well.
I have one way of going about it, but it doesn't work for one frustrating reason (mentioned later). I can turn the original video into two intermediate videos - one with just the i-frame before the sequence jump, and one with the p-frame that was two frames later. I then create a concat.txt file with the following contents:
file video.mkv
file video1.mkv
And run ffmpeg -y -f concat -i concat.txt -c copy output.mp4. This produces the expected output, although it is of course not as efficient as I would like, since it requires creating intermediate files and reading the .txt file from disk (performance is very important in this project).
But worse yet, I couldn't generate the intermediate videos with ffmpeg; I had to use Avidemux. I tried all sorts of variations on ffmpeg -y -ss 00:00:00 -i video.mp4 -t 0.04 -codec copy video.mkv, but that command seems to really bug out with videos 1-2 frames long, while it works fine for longer videos. My best guess is that there is some internal check to ensure the output video is not corrupt (which, unfortunately, is exactly what I want it to be!).
Maybe there's a way to do it this way that gets around that problem, or better yet, a more elegant solution to the problem in the first place.
Thanks!
If you know the PTS or data offset or packet index of the target frame, then you can use the noise bitstream filter. This is codec-agnostic.
ffmpeg -copyts -i input -c copy -enc_time_base -1 -bsf:v:0 noise=drop=eq(pos\,11291) out
This will drop the packet from the first video stream stored at offset 11291 in the input file. See other available variables at http://www.ffmpeg.org/ffmpeg-bitstream-filters.html#noise
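To find the pos, pts, or packet index of the frame to drop, ffprobe can dump every video packet; for example (a sketch, assuming the target is in the first video stream):
ffprobe -v error -select_streams v:0 -show_entries packet=pts,pos,flags -of csv input
Count the rows to get the packet index, or read pos directly for the drop expression above.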
Alright, real simple here. I'm rendering some fractal flames I've created over the years, which makes the math on all of this really simple.. lol.
I'm trying to generate a 5 second video at 60fps that when played continuously makes a perfect loop.
So I sequence and render exactly 300 frames numbered 000.png through 299.png for one loop. I then send this into FFMpeg with the following command:
ffmpeg -f image2 -framerate 60 -start_number 0 -i '%03d.png' -r 60 -crf 10 output.webm
No matter what, it kills the last 12-18 frames depending on the run and creates a video that players recognize as 4 seconds only.
Here is a snippet of the processing output. (Take note: 300 frames at 60 fps should come out at exactly 5 seconds, and the input side does claim exactly 5 seconds, yet no matter what I do the output lands around 4.66 seconds.)
I have tried replacing the -crf setting with just -quality good. I have tried moving around where I state the framerate. I have tried removing the -r from the output and putting it back in. I have tried building out this call to be as specific as possible, such as strictly specifying the encoder and its options. Oh, I have tried other encoders and get the same result. I have even tried -hwaccel using NVENC and CUVID respectively.
Nothing I do works.
Any thoughts here? Maybe alternatives to FFMpeg? Maybe different versions of FFMpeg? I don't know what I should do next and thought I would ask.
Diagnostic output on a finished file, for reference (this one actually got close, with 294 frames and a 4.9-second runtime; it is much higher res, though):
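For reference, one way to check how many frames actually land in an output file (a sketch; -count_frames decodes the whole file, so it can be slow on long videos):
ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames,avg_frame_rate:format=duration -of default=noprint_wrappers=1 output.webm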
I'm quite new to ffmpeg, and I've been learning how to make a 5-hour video that loops a short clip, with an intro video at the start and an outro/end-screen video at the end. I would like to know how I could do it more efficiently, since I feel it's really taking a long time and too much work. My process and commands are as follows:
Video specs: 3840x2160, 60 fps, 18M bitrate, HEVC (H.265)
First I create the intro and outro/end-screen videos and save them in .mp4 format (both are 10 seconds long) using Adobe Premiere.
I create the audio in Adobe Audition for however long I want it to play, in this case 5 hours.
I create the main video (which is 10 seconds long), which I want to loop for 5 hours or any amount of time I want. This takes a lot of time, produces a huge file, and also means adding the video's name to loop.txt over and over (video1.mp4, video2.mp4, ... until I reach 5 hours). I use the command:
ffmpeg -f concat -i loop.txt -c copy main5hours.mp4
Then I concat the intro, main, and outro/end-screen videos together using:
ffmpeg -f concat -i files.txt -c copy videowithoutaudio.mp4
Then I merge the video and audio using:
ffmpeg -i videowithoutaudio.mp4 -i audio.flac -c:v copy -c:a copy finalvideo.mp4
So this process takes a lot of time and disk space, and I want some advice on how I could do it more efficiently, saving myself time, file size, and work. A possible shortcut is sketched below.
I read that I could also overlay the intro and end-screen videos, but I'm not sure how to do that, and would it make any difference?
P.S.: I'll be using the same intro and end-screen videos for all the other videos I'll be making. Thanks in advance for any advice and help.
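One option that might remove the loop.txt step entirely (a sketch, assuming the 10-second main clip is named main.mp4 and a reasonably recent ffmpeg): the -stream_loop input option repeats the input, so 1799 extra loops gives 1800 plays, i.e. 5 hours, without writing thousands of lines to a text file:
ffmpeg -stream_loop 1799 -i main.mp4 -c copy main5hours.mp4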
I'm encoding videos by scenes. At the moment I have two solutions for doing so. The first one uses a Python application which gives me a list of frame numbers that represent scene changes, like this:
285
378
553
1145
...
The first scene goes from frame 1 to 285, the second from 285 to 378, and so on. So I made a bash script which encodes all these scenes. Basically what it does is take the current and previous frame numbers, convert them to times, and finally run the ffmpeg command:
# convert frame numbers to seconds, assuming 24 fps
begin=$(awk -v f="$previous" 'BEGIN{ print f/24 }')
end=$(awk -v f="$current" 'BEGIN{ print f/24 }')
time=$(awk -v b="$begin" -v e="$end" 'BEGIN{ print e-b }')
ffmpeg -i $video -r 24 -c:v libx265 -f mp4 -c:a aac -strict experimental -b:v 1.5M -ss $begin -t $time "output$count.mp4" -nostdin
This works perfectly. The second method uses ffmpeg itself: I run a scene-detection command (sketched after the list below) and it gives me a list of times, like this:
15.75
23.0417
56.0833
71.2917
...
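The invocation that produces such a list looks roughly like this (a sketch, since the exact command isn't reproduced here; the 0.4 scene-change threshold is an example value):
ffmpeg -i "$video" -vf "select='gt(scene,0.4)',showinfo" -f null - 2>&1 | grep pts_time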
Again I made a bash script that encodes all these scenes. In this case I don't have to convert frames to times, because what I have are already times:
# duration of the scene, in seconds
time=$(awk -v b="$previous" -v e="$current" 'BEGIN{ print e-b }')
ffmpeg -i $video -r 24 -c:v libx265 -f mp4 -c:a aac -strict experimental -b:v 1.5M -ss $previous -t $time "output$count.mp4" -nostdin
With all that explained, here comes the problem. Once all the scenes are encoded, I need to concat them, and for that I create a list with the video names and then run the ffmpeg command.
list.txt
file 'output1.mp4'
file 'output2.mp4'
file 'output3.mp4'
file 'output4.mp4'
command:
ffmpeg -f concat -i list.txt -c copy big_buck_bunny.mp4
The problem is that the concatenated video is longer than the original by 2.11 seconds: the original lasts 596.45 seconds and the encoded one lasts 598.56. I added up every segment's duration and got 598.56, so I think the problem is in the encoding process. Both videos have the same number of frames. My goal is to get metrics about the encoding process, and when I run VQMT to get the PSNR and SSIM I get weird results, which I think is due to this problem.
By the way, I'm using the big_buck_bunny video.
The probable difference is due to the copy codec. In the latter case, you tell ffmpeg to copy the segments, but it can't do that exactly at your input times.
It first has to find the previous I-frame (a frame that can be decoded without reference to any previous frame) and start from there.
To get what you need, you have to either re-encode the video (like you did in the two former examples) or change the times to start and stop at I-frames.
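To find those I-frame times, ffprobe can list keyframes only (a sketch; -skip_frame nokey makes the decoder skip everything except keyframes):
ffprobe -v error -select_streams v:0 -skip_frame nokey -show_entries frame=pts_time -of csv=p=0 "$video"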
To make sure I'm getting your issue correctly:
You have a source video (that's encoded at variable frame rate, close to 18fps)
You want to split the source video via ffmpeg, by forcing the frame rate to 24 fps.
Then you want to concat each segment.
I think the issue is mainly that you have some discrepancy in the timing (if I divide the frame indexes by the times you've given, I get between 16 and 18 fps). When you convert them in step 2, the output video segments will be at 24 fps. ffmpeg does not resample along the time axis, so if you force a video rate, the video will speed up or slow down.
There is also the issue of consistency for the stream:
Typically, a video stream must start with an I-frame, so when splitting with the copy codec, FFMPEG has to locate the previous I-frame, and this changes the duration of the segment.
When you are concatenating, you could also have a consistency issue (that is, if the segment you are concatenating ends with an I-frame and the next one starts with an I-frame, it's possible FFMPEG drops either one, although I don't remember what the current behavior is)
So, to solve your issue, if I were you, I would avoid step 2 (it's bad for quality anyway). That is, I would use ffmpeg to split the segments of interest based on frame number (that's the only value that isn't approximate in your scheme) into PNG or PPM frames (or to a pipe, if you don't care about keeping them), and then concat all the frames by encoding them in the last step with the expected rate set to totalVideoTime / totalFrameCount.
You'll get a smaller and higher quality final video.
If you can't do what I said for whatever reason, at least for the concat input, you should use the ffconcat format:
ffconcat version 1.0
file segment1
duration 12.2
file segment2
duration 10.3
This will give you the expected duration by cutting each segment if it's longer.
For selecting by frame number (instead of by time, since time is hard to get right on variable-frame-rate video), you should use the select filter, like this:
-vf "select='between(n\,start_frame_num\,end_frame_num)',setpts=PTS-STARTPTS"
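Plugged into a full command, splitting the first scene from the question to PNG frames might look like this (a sketch; 0 and 284 are the 0-based frame bounds of that scene, and -vsync 0 writes exactly one image per selected frame):
ffmpeg -i "$video" -vf "select='between(n\,0\,284)',setpts=PTS-STARTPTS" -vsync 0 scene1_%05d.png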
I suggest checking the input and output frame rates and making sure they match. That could be a source of the discrepancy.