I am using the following command line under Windows to convert a video file to individual frames for use in a project. But the project will eventually use a 16-bit RGB565 palette. Is it possible to use palettegen to create a 256-colour RGB565 palette instead of RGB888? I want to cut down the colour depth before reducing the images to 256 colours, in the hope of a slightly better fit for the palette.
ffmpeg -y -i "input.mpg" -filter_complex "fps=15,scale=220:-1:flags=bilinear:sws_dither=none[x];[x]split[x1][x2];[x1]palettegen=reserve_transparent=off:stats_mode=single:max_colors=256[p];[x2][p]paletteuse=new=1:dither=none" frames/%%03d.bmp
Thanks.
paletteuse outputs pal8, which is palettized 8-bit (see ffmpeg -pix_fmts).
The bmp encoder supports these pixel formats: bgra bgr24 rgb565le rgb555le rgb444le rgb8 bgr8 rgb4_byte bgr4_byte gray pal8 monob (see ffmpeg -h encoder=bmp).
By default ffmpeg automatically selects the closest matching pixel format supported by the output encoder. In this case the bmp encoder directly supports pal8, so pal8 will be used.
If you want to force conversion to a different pixel format use the format filter:
ffmpeg -i "input.mpg" -filter_complex "fps=15,scale=220:-1:flags=bilinear:sws_dither=none[x];[x]split[x1][x2];[x1]palettegen=reserve_transparent=off:stats_mode=single:max_colors=256[p];[x2][p]paletteuse=new=1:dither=none,format=rgb565le" frames/%03d.bmp
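To confirm which pixel format actually ended up in the output, you can inspect one of the extracted frames with ffprobe (a sketch; frames/001.bmp is just an example filename from the pattern above):

```shell
# Show the pixel format of one extracted frame (e.g. rgb565le after the format filter).
ffprobe -v error -select_streams v:0 \
  -show_entries stream=pix_fmt \
  -of default=noprint_wrappers=1 frames/001.bmp
```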
Related
I have a set of transparent images, where each one represents a frame of a video. I know that I can overlay them on top of another video using -i %d.png. What I want to be able to do is turn them into a transparent video ahead of time, and then later be able to overlay that transparent video onto another video. I've tried just doing -i %d.png trans.mov and then overlaying trans.mov on top of another video, but trans.mov doesn't actually seem to be transparent.
You have to use an encoder that supports transparency (an alpha channel). You can view a list of encoders with ffmpeg -h encoders and get further details with ffmpeg -h encoder=<encoder name>, such as ffmpeg -h encoder=qtrle. Then refer to the Supported pixel formats line: if there is an "a" in a supported pixel format name, such as rgba, then it supports alpha. See a general list of pixel formats with ffmpeg -pix_fmts.
The simplest solution is to mux the PNG files into MOV:
ffmpeg -framerate 25 -i %d.png -c copy output.mov
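To check that the resulting MOV really carries an alpha channel, and then to overlay it on another video, something like the following should work (bg.mp4 is a placeholder name for the background video):

```shell
# Check the pixel format; an "a" in the name (e.g. rgba, argb) indicates alpha.
ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt \
  -of default=noprint_wrappers=1 output.mov

# Overlay the transparent video on top of a background video.
ffmpeg -i bg.mp4 -i output.mov -filter_complex overlay out.mp4
```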
I'm trying to find out if it's possible to specify quality when converting to the Animation codec.
I've used this bit of code and don't see anything about quality.
ffmpeg -h encoder=qtrle
I know this is my generic command to convert to the Animation codec:
ffmpeg -i input.mp4 -codec copy -c:v qtrle output.mov
Any advice?
The Animation encoder has no quality or bitrate options. It is a lossless encoder which implements run-length encoding, a fixed compression procedure with no variables for rate control.
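Since there is nothing to tune, one sanity check is to compare the encode against the source with the psnr filter; a truly identical stream reports an infinite average PSNR in the log. This is a sketch; note that a pixel-format conversion on the way in (e.g. yuv420p to an RGB format) can introduce small differences even though qtrle itself is lossless:

```shell
# Encode with the Animation codec, then compare against the source.
# "psnr_avg:inf" in the log means the decoded frames are bit-identical.
ffmpeg -i input.mp4 -c:v qtrle output.mov
ffmpeg -i input.mp4 -i output.mov -filter_complex psnr -f null -
```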
Similar to what alphamerge / alphaextract do, but instead of having two sources I want to use three:
InputVideo1, AlphaOfInputVideo1, BackgroundVideo
The idea would be to overlay InputVideo1 on top of BackgroundVideo, using AlphaOfInputVideo1 to do a more accurate blending. Is this possible? Using intermediate steps (e.g. using alphamerge to generate intermediate RGBA bitmaps) is acceptable.
Basic syntax for this operation is
ffmpeg -i input -i alpha -i bg -filter_complex "[0][1]alphamerge[ia];[2][ia]overlay" out.mp4
The frame sizes of input and alpha must be the same. The frame rate and frame count should also match, to avoid misaligned merges.
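If the alpha source is not the same size as the main input, the scale2ref filter can resize it to match before the merge (a sketch, using the same placeholder input names as above):

```shell
# Scale the alpha input to the main input's dimensions, then merge and overlay.
# scale2ref scales its first input to the size of its second (reference) input.
ffmpeg -i input -i alpha -i bg -filter_complex \
  "[1][0]scale2ref[a][v];[v][a]alphamerge[ia];[2][ia]overlay" out.mp4
```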
I have a QuickTime video file whose video stream is in Motion JPEG format. I extract every frame in the file with
ffmpeg -i a.mov -vcodec copy -f image2 %d.jpg
I found that every JPEG file actually contains two FFD8 markers, which means there are two images in a single JPEG file.
Is this correct? Is the file interlaced? Does anything special need to be passed to the codec?
Yes, Motion JPEG supports interlaced content. If each JPEG is half the full video height, that means the MOV is interlaced, and you cannot use -vcodec copy to extract the frames. Try ffmpeg's -deinterlace option, or use the yadif filter.
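Extracting deinterlaced frames with yadif might look like this (a sketch; this re-encodes the frames rather than stream-copying them, which is why -vcodec copy has to go):

```shell
# Deinterlace with yadif while extracting frames; -q:v 2 keeps JPEG quality high.
# Use yadif=1 instead to emit one frame per field (doubling the frame count).
ffmpeg -i a.mov -vf yadif -q:v 2 %d.jpg
```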
I am generating an animated gif from an mp4, but due (I think) to colour reduction (gif requires -pix_fmt rgb24) the result is somewhat blotchy, like running an image through an oil-paint (or maybe "posterize") special-effect filter. I think the quality could be better, but I don't know what to tweak.
I'm not sure about this, but looking at the colour palette of the resulting gif in an image editor, it does not even appear to have attempted to create a colour palette specific to this clip; instead it seems to use a generic palette, which wastes a lot of pixmap space. That is, if I am interpreting this correctly.
Any tips on preserving the original video image instead of getting a "posterized" animated gif?
To get better-looking gifs, you can use generated palettes. The palettegen filter will generate a PNG palette to use with the paletteuse filter:
ffmpeg -i input.mkv -vf palettegen palette.png
ffmpeg -i input.mkv -i palette.png -lavfi paletteuse output.gif
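The two passes can also be combined into a single command with the split filter, so the palette never has to be written to disk (a sketch):

```shell
# Generate and apply the palette in one invocation: split the stream,
# feed one copy to palettegen, and apply the result to the other copy.
ffmpeg -i input.mkv -filter_complex \
  "[0:v]split[a][b];[a]palettegen[p];[b][p]paletteuse" output.gif
```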
You can try using -vf format=rgb8,format=rgb24.