This question is similar to Remove all non key frames from video without re-encoding, but this one is specific to H.264 (AVC), not H.265 (HEVC). Also, the accepted answer to that question results in an unplayable 1 KB file for me.
My command line currently looks like this:
ffmpeg.exe -skip_frame nokey -i input.mp4 -vf "select='eq(pict_type,PICT_TYPE_I)'" -vsync vfr output.mp4
input.mp4 is 30fps, and after I run the command line above, output.mp4 is 2fps as expected, but it's composed not just of I-frames (keyframes) but also P-frames and B-frames. The command is also CPU-intensive, which suggests it is re-encoding, and I want NO re-encoding.
It also removes the EXIF metadata, which I would like to preserve in the file. I just want to remove the non-keyframes and keep everything else.
With ffmpeg 5.0 or later, and assuming your input container marks video keyframes (MP4, MKV do), use
ffmpeg -i input.mp4 -c copy -bsf:v "noise=drop=not(key)" output.mp4
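To confirm that only keyframes remain, you can list the picture type of every frame in the output with ffprobe; every line should read I:

ffprobe -v error -select_streams v:0 -show_entries frame=pict_type -of csv=p=0 output.mp4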
Before posting I searched and found similar questions on Stack Overflow (I list some below); none helped me towards a solution, hence this post. One way my case differs from many posts I have seen is the duration each image is shown within the movie file.
A camera captures 1 image every 30 seconds. I need to stream them, preferably via HLS, so I wrap 2 images in an MP4 and then convert the MP4 to MPEG-TS. Each MP4 and TS file plays fine individually (each contains two images, each image transitions after 30 seconds, and each movie file is 1 minute long).
When I reference the two TS files in an M3U8 playlist, only the first TS file gets played. Can anyone advise why it stops, and how I can get it to play all the TS files I create, not just the first one? Besides my ffmpeg commands, I also include my VLC log file (though I expect to stream to Firefox/Chrome clients). I am using ffmpeg 4.2.2-static installed on an AWS EC2 instance with AMI2 Linux.
I have four JPGs named image11.jpg, image12.jpg, image21.jpg, image22.jpg. The images look near-identical, as only the timestamp in the top left changes.
The following command creates 1.mp4 from image11.jpg and image12.jpg, each image displayed for 30 seconds, so the total duration of the mp4 is 1 minute. It plays as expected.
ffmpeg -y -framerate 1/30 -f image2 -i image1%1d.jpg -c:v libx264 -vf "fps=1,format=yuvj420p" 1.mp4
I then convert 1.mp4 to an MPEG-TS file, creating 1.ts. It plays as expected.
ffmpeg -y -i 1.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -f mpegts 1.ts
I repeat the above steps, this time for image21.jpg and image22.jpg, creating 2.mp4 and 2.ts:
ffmpeg -y -framerate 1/30 -f image2 -i image2%1d.jpg -c:v libx264 -vf "fps=1,format=yuvj420p" 2.mp4
ffmpeg -y -i 2.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -f mpegts 2.ts
Thus I now have 1.mp4, 1.ts, 2.mp4, and 2.ts, and all four play individually just fine.
Using ffprobe I can confirm their duration is 60 seconds, for example:
ffprobe -i 1.ts -v quiet -show_entries format=duration -hide_banner -print_format json
My m3u8 playlist follows:
#EXTM3U
#EXT-X-VERSION:4
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-MEDIA-SEQUENCE:1
#EXT-X-TARGETDURATION:60.000
#EXTINF:60.0000,
1.ts
#EXTINF:60.000,
2.ts
#EXT-X-ENDLIST
Can anyone advise where I am going wrong?
VLC Error Log (though I expect to play via web browser)
I have researched the process using these (and other pages) as a guide:
How to create a video from images with ffmpeg
convert from jpg to mp4 by ffmpeg
ffmpeg examples page
FFMPEG An Intermediate Guide/image sequence
How to use FFmpeg to convert images to video
Take a look at the start_pts/start_time values in the ffprobe -show_streams output; my guess is that they all start at zero or near zero, which will cause playback to fail after your first segment.
You can still produce the segments independently, but you will want to use something like -output_ts_offset to correctly set the timestamps for subsequent segments.
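For example, a sketch (not tested against your exact pipeline): first inspect the start timestamps, then regenerate the second segment with its timestamps shifted by 60 seconds, assuming each segment is exactly one minute long:

ffprobe -v error -select_streams v:0 -show_entries stream=start_pts,start_time -of json 2.ts
ffmpeg -y -i 2.mp4 -output_ts_offset 60 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -f mpegts 2.ts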
The following solution works well for me. I have tested it uninterrupted for more than two hours and believe it ticks all my boxes. (Edited because I forgot the all-important -re flag.)
ffmpeg loops continuously, reading test.jpg and streaming it to my RTMP server. When my camera posts an image every 30 seconds, I copy the new image over the existing test.jpg, which in effect changes what is streamed out.
Note the command below is one logical command; I have broken it across lines (with trailing backslashes) to assist reading. The order of the parameters is important: -loop and -fflags +genpts, for example, must appear before the -i parameter.
ffmpeg \
  -re \
  -loop 1 \
  -fflags +genpts \
  -framerate 1/30 \
  -i test.jpg \
  -c:v libx264 \
  -vf fps=25 \
  -pix_fmt yuvj420p \
  -crf 30 \
  -f fifo -attempt_recovery 1 -recovery_wait_time 1 \
  -f flv rtmp://localhost:5555/video/test
Some arguments explained:
-re means read the input at its native frame rate, i.e., play in real time
-loop 1 turns looping on (0 turns it off)
-fflags +genpts is something I only half understand. PTS (presentation timestamps) determine when frames are presented, and without this flag the PTS was reset to zero with every new image. Using this argument means I avoid EXT-X-DISCONTINUITY when a new image is served.
-framerate 1/30 means one frame every 30 seconds
-i test.jpg is my image 'placeholder'. As new images are received via a separate script, it overwrites this image. Combined with -loop, this means the ffmpeg output will pick up the new image (see the sketch after this list).
-c:v libx264 selects H.264 encoding for the output video
-vf fps=25 sets the output frame rate. Removing this, or using a different value, resulted in my output stream not being 30 seconds.
-pix_fmt yuvj420p (sometimes I have seen yuv420p referenced, but that did not work in my environment). I believe JPEGs can use different colour ranges, and this switch ensures I can process a wider choice.
-crf 30 sets the constant rate factor, i.e., the quality/compression trade-off. Note that lower values mean higher quality and less compression (quality is important for my client).
-f fifo -attempt_recovery 1 -recovery_wait_time 1 -f flv rtmp://localhost:5555/video/test is part of the magic to go with -loop. I believe it keeps the connection open to my streaming server and reduces the risk of DISCONTINUITY tags in the playlist.
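As an aside, here is a minimal sketch of the separate updater script mentioned above. The camera URL is a made-up placeholder; the point is that writing to a temporary file and renaming it within the same directory is atomic, so ffmpeg never reads a half-written JPEG:

# hypothetical updater: fetch the new capture, then atomically swap it in
curl -s http://camera.local/snapshot.jpg -o test.jpg.tmp
mv test.jpg.tmp test.jpg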
I hope this helps someone going forward.
The following links helped nudge me forward, and I share them as they might help others improve upon my solution:
Creating a video from a single image for a specific duration in ffmpeg
How can I loop one frame with ffmpeg? All the other frames should point to the first with no changes, maybe like a recursion
Display images on video at specific framerate with loop using FFmpeg
Loop image ffmpeg HLS
https://trac.ffmpeg.org/wiki/Slideshow
https://superuser.com/questions/1699893/generate-ts-stream-from-image-file
https://ffmpeg.org/ffmpeg-formats.html#Examples-3
https://trac.ffmpeg.org/wiki/StreamingGuide
I have a video that is interlaced and has a frame rate of 30000/1001.
If I deinterlace it using bwdif, and keeping one output frame per field, I get a fps of 60000/1001, as expected.
I want to convert this video to 50 fps while keeping all frames, thus slowing it down a bit.
If I do it with two ffmpeg commands, like the following, it works:
ffmpeg -i input.mp4 -filter_complex bwdif test.mp4
ffmpeg -i test.mp4 -filter_complex "[v1]setpts=60000/1001/50*PTS[v2];[v2]fps=50" test2.mp4
I would like to do it in one command. I tried the following command:
ffmpeg -i input.mp4 -filter_complex "[0]bwdif[v1];[v1]setpts=60000/1001/50*PTS[v2];[v2]fps=50" test.mp4
However, the result does not look as expected: some frames are duplicated and others are dropped.
Does anybody know why this is? Is there a correct way to do it in one command?
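For reference, one way to see where the two pipelines diverge is to dump and compare the per-frame timestamps of the two outputs:

ffprobe -v error -select_streams v:0 -show_entries frame=pts_time -of csv=p=0 test2.mp4 > two_step.txt
ffprobe -v error -select_streams v:0 -show_entries frame=pts_time -of csv=p=0 test.mp4 > one_step.txt
diff two_step.txt one_step.txt | head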
Machine learning algorithms for video processing typically work on frames (images) rather than video.
In my work, I use ffmpeg to dump a specific scene as a sequence of .png files, process them in some way (denoise, deblur, colorize, annotate, inpainting, etc), output the results into an equal number of .png files, and then update the original video with the new frames.
This works well with constant frame-rate (CFR) video. I dump the images like so (eg, a 50-frame sequence starting at 1:47):
ffmpeg -i input.mp4 -vf "select='gte(t,107)*lt(selected_n,50)'" -vsync passthrough '107+%06d.png'
And then, after editing the images, I replace the originals like so (for a 12.5fps CFR video):
ffmpeg -i input.mp4 -itsoffset 107 -framerate 25/2 -i '107+%06d.png' -filter_complex "[0]overlay=eof_action=pass" -vsync passthrough -c:a copy output.mp4
However, many of the videos I work with are variable frame-rate (VFR), and this has created some challenges.
A simple solution is to convert VFR video to CFR, which ffmpeg wants to do anyway, but I'm wondering if it's possible to avoid this. The reason is that CFR conversion requires either dropping frames (since the purpose of ML video processing is usually to improve the output, I'd like to avoid this) or duplicating frames (an upscaling algorithm I'm working with right now uses the previous and next frames for data; if either of those is a duplicate, it contributes no data for upscaling).
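As an aside, a quick heuristic for spotting VFR input is to compare the stream's nominal and average frame rates; if r_frame_rate and avg_frame_rate differ, the file is likely VFR:

ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate,avg_frame_rate -of default=nw=1 input.mp4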
With -vsync passthrough, I had hoped that I could simply remove the -framerate option, and preserve the original frames as-is, but the resulting command:
ffmpeg -i input.mp4 -itsoffset 107 -i '107+%06d.png' -filter_complex "[0]overlay=eof_action=pass" -vsync passthrough -c:a copy output.mp4
uses ffmpeg's default of 25fps, and drops a lot of frames. Is there a reliable way to replace frames in VFR video?
Yes, it can be done, but it's complicated. It is crucial that the overlay video have exactly the same frame timestamps as the underlay video for this process to work reliably. Generating such a VFR video segment overlay requires capturing the frame timestamps from the source video to generate a precisely timed replacement segment.
The short version of the process is to replace the above commands with the following to extract the images:
ffmpeg -i input.mp4 -vf "select='gte(t,107)*lt(selected_n,50)',showinfo" -vsync passthrough '107+%06d.png' 2>&1 | sed 's/\r/\n/g' | showinfo2concat.py --prefix="107+" > concat.txt
This requires a script that can be downloaded here. After editing the images, update the source video with:
ffmpeg -i input.mp4 -f concat -safe 0 -i concat.txt -filter_complex "[1]settb=1/90000,setpts=9644455+PTS*25/90000[o];[0:v:0][o]overlay=eof_action=pass" -vsync passthrough -r 90000 output.mp4
Where 90000 is the timescale (inverse of timebase), and 9644455 is the PTS of the first frame to replace.
See the source for more details about what these commands actually do.
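The script itself isn't reproduced here, but as a rough illustration of the idea (assuming the showinfo output was saved to showinfo.log and the image numbering starts at 1), a shell pipeline along these lines could turn the log into a concat file:

# derive per-frame durations from successive pts_time values (rough stand-in for the linked script)
grep -o 'pts_time:[0-9.]*' showinfo.log | cut -d: -f2 | awk -v prefix="107+" '
  NR > 1 { printf "file %s%06d.png\nduration %.6f\n", prefix, NR-1, $1 - prev }
  { prev = $1 }
  END { printf "file %s%06d.png\n", prefix, NR }' > concat.txt

Each frame's display duration is the gap to the next frame's pts_time; the last frame gets no explicit duration.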
I'm trying to extract the alpha channel from a ProRes (mov) file, as greyscale, into a separate mp4 file (to emulate video with transparency on an HTML page later).
ffmpeg -i in.mov -hide_banner -f mp4 -vcodec libx264 -vf alphaextract,format=yuv420p out.mp4
but I don't get a filled alpha channel, only a sort of outline of it. I'm pretty sure the original file is fine (I tried with different files), and encoding it to WebM showed correct transparency.
What I get from ffmpeg
How the original file looks
It's a bug; patched in git master.
A workaround for older versions is:
ffmpeg -i in.mov -vf format=yuva444p16le,alphaextract,format=yuv420p -c:v libx264 out.mp4
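To confirm the source actually carries alpha, check the decoded pixel format first; ProRes 4444 with alpha typically decodes to something like yuva444p10le:

ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,pix_fmt -of csv=p=0 in.mov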
Is it possible, using the ffmpeg CLI, to replace a specific frame at a specified timestamp with another image? I know how to extract all frames from a video and re-stitch them into another video, but I am looking to avoid that process if possible.
My goal:
Given a video file input.mp4
Given a PNG file, image.png, known to occur at an exact timestamp within input.mp4
create out.mp4 with image.png replacing the frame at that position in input.mp4
The basic command is
ffmpeg -i video -i image \
-filter_complex \
"[1]setpts=4.40/TB[im];[0][im]overlay=eof_action=pass" -c:a copy out.mp4
where 4.40 is the timestamp of the frame to be replaced.
Note that images default to a framerate of 25fps. Thus, for timestamps that are not multiples of (1/25=)0.04s, the framerate must be specified (eg, to replace frame at timestamp 3.5035 in a 29.97fps video):
ffmpeg -i input.vid -itsoffset 3.5035 -framerate 30000/1001 -i frame.png -filter_complex "[0:v:0][1]overlay=eof_action=pass" output.vid
This technique works just as well for replacing multiple sequential frames (eg, to replace frames starting at 107s in a 12.5fps video):
ffmpeg -i input.mp4 -itsoffset 107 -framerate 25/2 -i '107+%06d.png' -filter_complex "[0:v:0][1]overlay=eof_action=pass" output.mp4
This only works for videos with constant framerates (CFR). For VFR video, I have a separate question.