Encode 256-color palettized bitmaps to h264 using libav - ffmpeg

I am converting a 32 bpp bitmap to 8 bpp with a 256-color palette. I want to encode this image using h264 and send it over a socket, then decode the frame on the other end and display it on my UI.
On the encoder side:
Capture the 32 bpp image.
Quantize the image to 8 bpp with a 256-color palette.
sws_scale the image from PIX_FMT_GRAY8 to YUV420P (see the sketch after these steps).
Encode it with h264 and send it over the socket.
On the decoder side:
Receive the image.
Decode the image.
sws_scale from YUV420P back to PIX_FMT_GRAY8.
Display it on the UI along with the palette information (sent from the encoder over the socket).
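For reference, here is a minimal sketch of the GRAY8-to-YUV420P step with libswscale, assuming a tightly packed 8 bpp source buffer and destination planes that are already allocated (for example with av_image_alloc); the function and buffer names are illustrative, not from the original post:

```cpp
#include <cstdint>
extern "C" {
#include <libswscale/swscale.h>
}

// Minimal sketch: convert an 8 bpp (GRAY8) index buffer to YUV420P.
// dstData/dstLinesize describe pre-allocated YUV420P planes
// (e.g. from av_image_alloc); error handling is kept to a minimum.
bool gray8_to_yuv420p(const uint8_t* src, int w, int h,
                      uint8_t* const dstData[4], const int dstLinesize[4]) {
    SwsContext* ctx = sws_getContext(w, h, AV_PIX_FMT_GRAY8,
                                     w, h, AV_PIX_FMT_YUV420P,
                                     SWS_BILINEAR, nullptr, nullptr, nullptr);
    if (!ctx) return false;

    const uint8_t* srcData[1] = { src };
    const int srcLinesize[1] = { w };   // tightly packed 8 bpp rows

    sws_scale(ctx, srcData, srcLinesize, 0, h, dstData, dstLinesize);
    sws_freeContext(ctx);
    return true;
}
```

Note that newer libav versions spell the formats AV_PIX_FMT_GRAY8 and AV_PIX_FMT_YUV420P rather than PIX_FMT_*.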
When I follow the above steps, I get a totally distorted image, and when I don't use the color palette I get a black-and-white image.
I am not clear on how to encode 8 bpp, 256-color palette bitmaps using h264 and decode them accordingly. Please help me with this.
I am working with C++ on the Windows platform.
Thanks in advance,
Paul.

h.264 is not (by default) a lossless codec, so you will not get the exact color out that you put in on a pixel-by-pixel basis. In YUV, the Y plane is luminance (black and white) and the UV planes are chroma; since you converted from GRAY8, your UV planes are empty here. The Y plane is compressed in a lossy fashion. So you may have a palette that looks like 0=black, 1=red, 2=green, ..., 255=white, and you put in a pixel with value 2. During compression, to remove complexity in the image and reduce file size, that 2 may become a 1. In a black-and-white image you will not notice the difference, but once you apply your palette, your green pixel has just turned red.
You either need to use a lossless codec, or encode your 256-color image to a real YUV color image and then, after decoding, re-quantize the colors back to your desired palette by finding the closest palette color for each decoded pixel.
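For illustration, here is a minimal sketch of that post-decode re-quantization, assuming the palette is available as 256 RGB triples (the struct and function names are hypothetical, as is the simple squared-distance metric):

```cpp
#include <cstdint>
#include <climits>

struct Rgb { uint8_t r, g, b; };

// Find the index of the palette entry closest to (r,g,b),
// using squared Euclidean distance in RGB space.
uint8_t nearest_palette_index(const Rgb palette[256],
                              uint8_t r, uint8_t g, uint8_t b) {
    int best = 0;
    long bestDist = LONG_MAX;
    for (int i = 0; i < 256; ++i) {
        long dr = palette[i].r - r;
        long dg = palette[i].g - g;
        long db = palette[i].b - b;
        long dist = dr * dr + dg * dg + db * db;
        if (dist < bestDist) { bestDist = dist; best = i; }
    }
    return static_cast<uint8_t>(best);
}
```

Snapping each decoded pixel back to its nearest palette entry hides small lossy errors, but it cannot recover a value that drifted closer to a different palette entry, which is why a truly lossless codec is the safer route.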

Related

How can I overlay an angled gradient or fade over the corner of a video with ffmpeg?

I want to darken the corner of a video with a (for instance) 45-degree black gradient. I'd like to customize the angle, feathering, color and opacity. My video is 10-bit UHD HEVC and I need to output to 10-bit lossless intermediate (probably v210 codec). How can I do all of this in ffmpeg?
Here is a mockup of what I want to do: [mockup image]

Image with transparent background in Flutter

I have an image of an object with a transparent background for my Flutter app.
But Flutter shows this image's background as it is, not as transparent.
How do I hide the background of the image in Flutter?
The image you are using is not a proper transparent PNG file; it's a JPEG. So please use a proper transparent PNG file. Here are some differences between JPEG and PNG:
Both support 24-bit true color (around 16 million colors); PNG also supports 256-color and monochrome images.
JPEG uses a lossy algorithm; PNG uses the ubiquitous lossless deflate algorithm, which we all know from ZIP.
PNG supports alpha as well as single-color transparency; JPEGs are opaque.
Compression ratios can be up to 50x for a JPEG, but at most about 4:1 for PNGs on most images with many colors.

ffmpeg How does the blend filter work

I have a small project which uses the blend filter of the FFmpeg library.
I read the examples in this document:
https://ffmpeg.org/ffmpeg-filters.html#blend_002c-tblend
But I don't clearly understand it.
X, Y: the coordinates of the current sample
W, H: the width and height of the currently filtered plane
What are the sample and the filtered plane?
Is there any documentation about these things?
In the context of an image, a sample refers to an individual pixel component. A pixel usually has multiple components, like RGB (red, green and blue) or YUV (luma and two chroma components), so 'sample' here refers to the individual stored values, i.e. a magenta RGB pixel is defined by three samples (255, 0, 255).
Pixels of a frame can be stored packed (R1G1B1R2G2B2...) or planar ([R1R2...RN][G1G2...GN][B1B2...BN]). The blend filter works on planar formats only.
In YUV images, the UV planes are typically subsampled, so the width and height of the UV planes are lower than those of the luma plane.
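As a toy illustration of those two terms (this is not ffmpeg's implementation), an 'average' blend over planar frames visits every sample of each filtered plane:

```cpp
#include <cstdint>
#include <cstddef>

// Toy 'average' blend of two planar frames: for each plane, walk every
// sample (one component value per pixel position) and mix the two inputs.
// widths/heights give per-plane dimensions, since subsampled chroma
// planes are smaller than the luma plane.
void blend_average(const uint8_t* const* planesA,
                   const uint8_t* const* planesB,
                   uint8_t* const* planesOut,
                   const int* widths, const int* heights, int numPlanes) {
    for (int p = 0; p < numPlanes; ++p) {           // the "filtered plane"
        for (int y = 0; y < heights[p]; ++y) {
            for (int x = 0; x < widths[p]; ++x) {   // (X, Y): current sample
                size_t i = static_cast<size_t>(y) * widths[p] + x;
                planesOut[p][i] = static_cast<uint8_t>(
                    (planesA[p][i] + planesB[p][i]) / 2);
            }
        }
    }
}
```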

How to insert cursor image in YUV(NV12) buffer?

I have captured the screen with Intel Capture in NV12 (YUV) format. Intel Capture does not capture the mouse cursor.
So I have to include the cursor image in the Intel-captured YUV buffer.
Is it possible to edit the YUV buffer to include a cursor image?
Please guide me on this.
Thanks in advance.
You can look at how an image is stored in NV12 (http://www.fourcc.org/yuv.php#NV12):
YUV 4:2:0 image with a plane of 8 bit Y samples followed by an interleaved U/V plane containing 8 bit 2x2 subsampled colour difference samples.
You can try this:
Convert your cursor to YUV: https://en.wikipedia.org/wiki/YUV#Conversion_to.2Ffrom_RGB
Add the cursor to the frame: the Y plane of the cursor goes into the Y plane of the frame, and likewise for UV (a sketch of this blit follows below).
You can also convert the frame to RGB and insert the cursor in RGB colorspace, but that is slower.
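For illustration, a minimal sketch of that blit, assuming the cursor has already been converted to NV12, is fully opaque, fits inside the frame, and is placed at even coordinates so it aligns with the 2x2 chroma grid (all names are hypothetical):

```cpp
#include <cstdint>
#include <cstring>

// Copy an NV12 cursor image into an NV12 frame at (dstX, dstY).
// NV12 layout: w*h bytes of Y, then (w/2)*(h/2) interleaved U/V byte pairs,
// so each UV row of a w-pixel-wide image is w bytes.
// Alpha blending is omitted; dstX and dstY must be even.
void blit_cursor_nv12(uint8_t* frame, int frameW, int frameH,
                      const uint8_t* cursor, int curW, int curH,
                      int dstX, int dstY) {
    uint8_t* frameY = frame;
    uint8_t* frameUV = frame + static_cast<size_t>(frameW) * frameH;
    const uint8_t* curY = cursor;
    const uint8_t* curUV = cursor + static_cast<size_t>(curW) * curH;

    for (int y = 0; y < curH; ++y)       // Y plane, one full row at a time
        std::memcpy(frameY + static_cast<size_t>(dstY + y) * frameW + dstX,
                    curY + static_cast<size_t>(y) * curW, curW);

    for (int y = 0; y < curH / 2; ++y)   // interleaved U/V plane, half height
        std::memcpy(frameUV + static_cast<size_t>(dstY / 2 + y) * frameW + dstX,
                    curUV + static_cast<size_t>(y) * curW, curW);
}
```

A real implementation would blend with the cursor's alpha mask rather than overwrite, but the plane arithmetic stays the same.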

Grayscale, monochrome, binary image in matlab

In Matlab:
1. An 8-bit grayscale image has pixel values ranging from 0 to 255; the pixel depth may vary (16-bit, 32-bit, etc.).
2. A binary image has pixel values that are either 0 or 1 (logical).
My question is: as per points 1 and 2, is a monochrome image a binary image or a grayscale image? I need clarification because I want to be 100% sure about monochrome images.
(As per 'Digital Image Processing Using Matlab' by Gonzalez, Woods, Eddins, a monochrome image is a grayscale image. (Topic 3.2, pg. 66))
Monochrome and grayscale are mostly interchangeable. Monochrome data has only one channel, but it is not always rendered as gray: digital x-ray data, for example, is monochrome because it carries only intensity; the printout is typically grayscale, but you could just as well use any other color.
Sepia images are also monochrome, but strictly speaking not grayscale.
