Compress Video While Preserving Alpha Channel - ffmpeg

I have a video with a transparent background that is very large despite being only 6 seconds long. I was hoping I could compress it with FFmpeg, but everything I try seems to discard the alpha channel...
This command brings the file down from 33 GB to 24 MB:
ffmpeg -i "C:\Users\djcim\Desktop\Intro For Now\Video Intro.avi" -map 0 -c:v libx264 -preset slow ^
-crf 17 -acodec copy "C:\Users\djcim\Desktop\Intro For Now\Compressed.avi"
But as stated, I lose the alpha channel. Any ideas on how I could significantly compress my file while preserving the alpha channel?

There are various codecs that support alpha, viz. qtrle, png, ffv1, etc.
Try those three to check which yields the smallest size. All are lossless. The first two only support RGB pixels, whereas FFV1 supports both RGB and YUV, but few applications support it. PNG is the most widely compatible.
e.g.
ffmpeg -i "Video Intro.avi" -map 0 -c:v png -c:a copy "Compressed.avi"
(I suggest using a MOV container rather than AVI.)
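For example, to compare the three you could run something like the following (a sketch; the output names are just examples, and MKV is used for FFV1 since MOV support for that codec is less common):
ffmpeg -i "Video Intro.avi" -c:v qtrle -c:a copy "Compressed-qtrle.mov"
ffmpeg -i "Video Intro.avi" -c:v png -c:a copy "Compressed-png.mov"
ffmpeg -i "Video Intro.avi" -c:v ffv1 -c:a copy "Compressed-ffv1.mkv"
Then keep whichever output is smallest while still carrying the alpha.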

Related

Converting images to video keeping GOP 1 using ffmpeg

I have a list of images saved in PNG format, named with incremental integers starting from 1, which need to be converted to a video with GOP 1 using ffmpeg. I have used the following command to convert the images to video, and subsequently used ffplay to seek to a particular frame. The displayed frame doesn't match the frame being sought. Any help?
ffmpeg -i image%03d.png -c:v libx264 -g 1 -pix_fmt yuv420p out.mp4
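To verify that the -g 1 encode really produced all-keyframe output (a quick check worth doing before debugging the seek itself), something like this ffprobe invocation should report every frame as pict_type=I with key_frame=1:
ffprobe -v error -select_streams v:0 -show_entries frame=key_frame,pict_type -of csv out.mp4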

ffmpeg: filter_complex issue: 25fps vs. 60fps (with transparent VP9 webm)

I have a lot of generated PNG images with transparency / alpha channel.
These need to be converted to a WebM file with the VP9 codec to get a video with transparent areas.
The output framerate should be 60 fps. But filter_complex (which I need to select the yuva420p format) apparently has an issue with that.
I found an answer for how to fix that and tried to adapt it to my case, but it did not work.
The 1055 source PNG files should create a video with 60 fps and a duration of 17.6 s. But the result is a file of 42.2 s (1055 frames at 25 fps); according to ffmpeg, 2530 frames were used.
So the video file itself has 60 fps, but the source frames are "stretched" to 25 fps. Kind of a slow-mo video.
What do I need to change?
Or would it be possible to put the PNGs into a VP9 WebM at 60 fps without using filter_complex at all?
The command I'm using, adapted from another answer I found, is:
ffmpeg -hide_banner -i srcImgMasked_%05d.png -filter_complex "nullsrc=size=1920x1080:rate=60 [base]; [0]format=yuva420p [bla]; [base] [bla] overlay=shortest=0" -c:v libvpx-vp9 -crf 20 -b:v 0 -threads 0 result.webm -y
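On the last point, a hedged sketch that skips filter_complex entirely: set the input frame rate on the image sequence and request the alpha-capable pixel format directly (libvpx-vp9 accepts yuva420p for WebM with alpha; whether 60 fps matches the intended timing of the material is an assumption):
ffmpeg -hide_banner -framerate 60 -i srcImgMasked_%05d.png -c:v libvpx-vp9 -pix_fmt yuva420p -crf 20 -b:v 0 -y result.webm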

FFMPEG DNxHR with Alpha to Webm

I'm trying to convert a video file exported from Blackmagic Fusion as DNxHR with the Alpha option checked to WebM or PNG. FFmpeg seems to ignore the alpha, and the background is black. Is there something I need to do? This is what I'm currently using:
ffmpeg -ss 0.5 -i DNxHR444A.mov -vframes 1 test.png
or
ffmpeg -i DNxHR444A.mov -c:v libvpx-vp9 -crf 30 -b:v 0 test.webm
I can upload my test video if that helps. But it's quite large.
FFmpeg's DNxHD/DNxHR decoder does not support alpha. Either use another codec on export, or export the alpha separately (it will be a grayscale picture). With the second method, ffmpeg can combine the main image and the alpha for onward processing.
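As an illustration of that second approach (the file names fill.mov and alpha.mov are hypothetical), the grayscale alpha export can be merged back onto the fill with the alphamerge filter and then encoded to VP9 with alpha:
ffmpeg -i fill.mov -i alpha.mov -filter_complex "[0:v][1:v]alphamerge" -c:v libvpx-vp9 -pix_fmt yuva420p -crf 30 -b:v 0 test.webm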

Changing resolution mid-video with FFMPEG

I have a source video (mpeg2video) which I'm transcoding to x264. The source contains 2 different programs recorded from TV. One is in 4:3 AR and the other 16:9 AR. When I play the source file through VLC the player correctly changes size to show the video at the correct AR. So far so good.
When I transcode, the conversion process auto-detects the AR from the first few frames and then transcodes the whole video using this AR. If the 16:9 section comes first, then the whole conversion is done in 16:9 and the 4:3 section looks stretched horizontally. If the 4:3 section is at the start of the source file, then the whole transcode is done in 4:3 and the 16:9 section looks squashed horizontally.
No black bars are ever visible.
Here's my command:
nice -n 17 ffmpeg -i source.mpg -acodec libfaac -ar 48000 -ab 192k -async 1 -copyts -vcodec libx264 -b 1250k -threads 2 -level 31 -map 0:0 -map 0:1 -map 0:2 -scodec copy -deinterlace output.mkv
I don't fully understand what's going on. How do I get the same 'change in AR' mid video in the output file that I have in the input video?
I don't think ffmpeg is designed to do that midway. You would have to write your own application using libav for it. The simpler way would be to create two chunks of video that you then combine.
EDIT:
The best way to deal with it is to detect the change of AR yourself, transcode the two segments separately, and join them.
EDIT2:
Use ffmpeg itself to chunk the video, demux anything you want and mux it back again. It should work fine. You needn't use avidemux.
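A rough sketch of that approach (the split time 00:30:00 and the forced aspect ratios are placeholders for wherever the AR change actually occurs; the encoder options mirror the original command and assume the same ffmpeg build):
ffmpeg -i source.mpg -t 00:30:00 -acodec libfaac -ar 48000 -ab 192k -vcodec libx264 -b:v 1250k -aspect 16:9 part1.mkv
ffmpeg -ss 00:30:00 -i source.mpg -acodec libfaac -ar 48000 -ab 192k -vcodec libx264 -b:v 1250k -aspect 4:3 part2.mkv
ffmpeg -f concat -i parts.txt -c copy output.mkv
where parts.txt contains the two lines file 'part1.mkv' and file 'part2.mkv'. Whether players honor the aspect-ratio change at the join depends on the container and the player.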

Create MP4 video using FFMPEG and JPEG2000 frames

I'm trying to create an MP4 video with ffmpeg using JPEG2000 images as frames.
It works when the JPEG2000 is 8bpp, but I need it to work for at least 12 bits (ideally 12, but could be 16). The images are grayscale.
This is the command I'm using:
ffmpeg.exe -i imagen.jp2 video1.mp4
If I try to use -pix_fmt it says it's not supported by the encoder (it doesn't matter which format I use).
Some sample images can be found here:
http://ioingresodemanda.com/jp2.rar
I could also use any other tool, it doesn't need to be ffmpeg.
UPDATE: Adding ffmpeg output - http://pastebin.com/NyY3vgpz
Thanks in advance
If you are OK with the MP4 file having a different video format, the following will work:
ffmpeg -strict -2 -i 12bit.jp2 -vcodec libx264 -an out.mp4
ffmpeg -strict -2 -i 12bit.jp2 -vcodec mpeg4 -an out.mp4
ffmpeg doesn't support 12-bit color. Most of the H.264 profiles only support 8-bit color; a few support 10-bit, and only the super-obscure lossless Hi444PP profile supports 14-bit color. The x264 encoder does support some of the profiles with 10-bit color, but that's as far as it goes, and you have to explicitly enable it using the --bit-depth option:
http://git.videolan.org/?p=x264.git;a=commit;h=d058f37d9af8fc425fa0626695a190eb3aa032af
As noted in the commit, you may also want to keep in mind that "very few H.264 decoders support >8 bit depth currently".
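For illustration, with an ffmpeg linked against a libx264 configured for 10-bit depth (per the linked commit, that means building x264 with --bit-depth=10), a 10-bit encode could be requested along these lines, accepting that the 12-bit source is reduced to 10 bits (the output name and pixel format here are assumptions):
ffmpeg -strict -2 -i 12bit.jp2 -vcodec libx264 -pix_fmt yuv420p10le -an out10.mp4
With an ordinary 8-bit libx264 build this command will refuse the pixel format.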
