What is the difference between ID2D1Bitmap and IWICBitmap?
I have raw memory data and I want to create a bitmap from it.
A WIC bitmap represents an image in system memory, in any of a wide range of pixel formats (decoded from JPEG, PNG, BMP, and so on). A D2D bitmap represents an image in GPU memory, in one of a handful of hardware-accelerated formats.
Assuming you want to draw the bitmap to the screen using D2D, and your raw memory data is in a format compatible with D2D, you should use ID2D1RenderTarget::CreateBitmap directly. If it is not in a compatible format (e.g. it is a pointer to the raw data of a .png file), you will need to load it into an IWICBitmap and then use ID2D1RenderTarget::CreateBitmapFromWicBitmap.
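For the first case, here is a minimal sketch, assuming you already have an ID2D1RenderTarget named renderTarget and a raw 32-bit premultiplied-BGRA buffer named pixels (those names, and the width/height variables, are placeholders):

#include <d2d1.h>
#include <d2d1helper.h>

// Describe the pixel format D2D should interpret the raw memory as.
D2D1_BITMAP_PROPERTIES props = D2D1::BitmapProperties(
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM,
                      D2D1_ALPHA_MODE_PREMULTIPLIED));

ID2D1Bitmap *bitmap = NULL;
HRESULT hr = renderTarget->CreateBitmap(
    D2D1::SizeU(width, height),
    pixels,       // raw BGRA data in system memory
    width * 4,    // pitch: bytes per scanline
    &props,
    &bitmap);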
Related
I use ffmpeg functions to decode H.264 frames and display them in a window on the Windows platform. The approach I use is as below (from FFMPEG Frame to DirectX Surface):
AVFrame *frame = avcodec_alloc_frame();
avcodec_decode_video(_ffcontext, frame, etc...);
lockYourSurface();
uint8_t *buf = getPointerToYourSurfacePixels();
// Create an AVPicture structure which contains a pointer to the RGB surface.
AVPicture pict;
memset(&pict, 0, sizeof(pict));
avpicture_fill(&pict, buf, PIX_FMT_RGB32,
               _ffcontext->width, _ffcontext->height);
// Convert the image into RGB and copy to the surface.
img_convert(&pict, PIX_FMT_RGB32, (AVPicture *)frame,
            _ffcontext->pix_fmt, _ffcontext->width, _ffcontext->height);
unlockYourSurface();
In the code, I use sws_scale instead of img_convert.
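For reference, the sws_scale version of that conversion might look like this (a sketch assuming the same variables as the snippet above; the context can be cached between frames, and error handling is omitted):

#include <libswscale/swscale.h>

// Set up a conversion context from the decoder's format to RGB32.
struct SwsContext *sws = sws_getContext(
    _ffcontext->width, _ffcontext->height, _ffcontext->pix_fmt,  // source
    _ffcontext->width, _ffcontext->height, PIX_FMT_RGB32,        // destination
    SWS_BILINEAR, NULL, NULL, NULL);

// Convert the decoded frame into the RGB32 surface described by 'pict'.
sws_scale(sws, (const uint8_t * const *)frame->data, frame->linesize,
          0, _ffcontext->height, pict.data, pict.linesize);

sws_freeContext(sws);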
When I pass the surface data pointer to sws_scale (in fact, in avpicture_fill), it seems that the pointer is actually to RAM, not to GPU memory, and when I want to display the surface, it seems that the data is moved to the GPU and then displayed. As far as I know, CPU utilization is high when data is copied between RAM and GPU memory.
How can I tell ffmpeg to render directly to a surface in GPU memory (not to a data pointer in RAM)?
I have found the answer to this problem. To prevent extra CPU usage when displaying frames using ffmpeg, we must not decode the frame to RGB. Almost all video files are decoded to YUV (this is the original image format inside the video file). The point here is that the GPU is able to display YUV data directly, without needing to convert it to RGB. As far as I know, with the usual ffmpeg builds, decoded data is always in RAM. For a frame, the amount of YUV data is very small compared to the RGB-decoded equivalent of the same frame. So when we move YUV data to the GPU instead of converting it to RGB and then moving that to the GPU, we speed up the operation in two ways:
No conversion to RGB
Amount of data moved between RAM and GPU is decreased
So finally the overall CPU usage is decreased.
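As a rough sketch of the numbers and of the plane upload (the destination pointers and pitches are placeholders that would come from locking your GPU surface; the exact layout depends on the surface format):

// Per frame at 8 bits per sample:
//   RGB32:     width * height * 4 bytes
//   YUV 4:2:0: width * height * 3 / 2 bytes (about 2.7x less data to move)
for (int row = 0; row < height; row++)        // Y plane, full resolution
    memcpy(dstY + row * dstPitchY,
           frame->data[0] + row * frame->linesize[0], width);
for (int row = 0; row < height / 2; row++) {  // U and V planes, half resolution
    memcpy(dstU + row * dstPitchU,
           frame->data[1] + row * frame->linesize[1], width / 2);
    memcpy(dstV + row * dstPitchV,
           frame->data[2] + row * frame->linesize[2], width / 2);
}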
I'm trying to make a simple video-player-like program with Direct2D and WIC bitmaps.
It requires fast, CPU-economical drawing (with stretching) of YUV-format frame data.
I've already tested with GDI. I hope switching to Direct2D gives at least a 10x performance gain (smaller CPU overhead).
What I'll be doing is basically as below:
1. Create an empty WIC bitmap A (as the drawing canvas)
2. Create another WIC bitmap B with the YUV frame data (format conversion)
3. Draw bitmap B onto A, then draw A to the D2D render target
For steps 1 and 2, I must select a pixel format.
WIC Native Pixel Formats
There is an MSDN page that recommends WICPixelFormat32bppPBGRA:
http://msdn.microsoft.com/en-us/library/windows/desktop/hh780393(v=vs.85).aspx
What's the difference between WICPixelFormat32bppPBGRA and WICPixelFormat32bppBGRA? (The former has an additional P.)
If WICPixelFormat32bppPBGRA is the way to go, is it always the case, regardless of hardware and/or configuration?
What is the most effective pixel format for WIC bitmap processing actually?
Unfortunately, using Direct2D 1.1 or lower, you cannot use a pixel format different from DXGI_FORMAT_B8G8R8A8_UNORM, which is equivalent to WIC's WICPixelFormat32bppPBGRA (the "P" applies if you use the D2D1_ALPHA_MODE_PREMULTIPLIED alpha mode in D2D).
If your target OS is Windows 8, then you can use Direct2D's newer features. As far as I remember, there is some kind of YUV support for D2D bitmaps. (Edit: No, there is not. RGB32 remains the only pixel format supported, along with some alpha-only formats.)
What is the most effective pixel format for WIC bitmap processing actually?
I'm not sure how to measure pixel format effectiveness, but if you want to use hardware acceleration you should draw using D2D instead of WIC, and use WIC only for colorspace conversion. (GDI is also hardware accelerated, by the way.)
What's the difference between WICPixelFormat32bppPBGRA and WICPixelFormat32bppBGRA? (The former has an additional P.)
P means that the RGB components are premultiplied by the alpha channel.
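A tiny illustration of what premultiplication does to a pixel (plain arithmetic, not tied to any particular API):

#include <stdint.h>

// Straight (non-premultiplied) BGRA: half-transparent pure red
//   B=0, G=0, R=255, A=128
// Premultiplied BGRA: each color component is scaled by alpha
//   B=0, G=0, R=128, A=128   (255 * 128 / 255 = 128)
uint8_t premultiply(uint8_t c, uint8_t a) {
    return (uint8_t)((c * a + 127) / 255);  // scale component by alpha, with rounding
}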
What I'll be doing is basically as below:
1. Create an empty WIC bitmap A (as the drawing canvas)
2. Create another WIC bitmap B with the YUV frame data (format conversion)
3. Draw bitmap B onto A, then draw A to the D2D render target
If you are targeting performance, you should minimize bitmap copy operations, and you should avoid using a WIC bitmap render target, because it uses software rendering. If your player only renders to a window, you can use an HWND render target, or a DeviceContext with a swap chain (depending on the Direct2D version you use).
Instead of rendering bitmap B onto bitmap A, you can use the software pixel format conversion features of WIC (e.g. IWICFormatConverter). Another way would be to write (or find) a custom conversion routine using SIMD operations, or to use shaders to convert the format (colorspace) on the GPU side. But the latter two require advanced knowledge.
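A minimal sketch of the IWICFormatConverter route, assuming you already have an IWICImagingFactory* named factory and an IWICBitmapSource* named source (both placeholder names):

#include <wincodec.h>

IWICFormatConverter *converter = NULL;
HRESULT hr = factory->CreateFormatConverter(&converter);
if (SUCCEEDED(hr))
    hr = converter->Initialize(
        source,                         // the bitmap to convert
        GUID_WICPixelFormat32bppPBGRA,  // the format D2D expects
        WICBitmapDitherTypeNone,
        NULL,                           // no palette
        0.0,                            // alpha threshold
        WICBitmapPaletteTypeCustom);
// 'converter' is itself an IWICBitmapSource in the target format;
// call CopyPixels on it to obtain the converted data.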
When it is converted, you can lock the pixels to get the pixel data and copy that data directly to a D2D bitmap (ID2D1Bitmap::CopyFromMemory()), assuming you already have a D2D bitmap of the right size and format.
And the last step would be to render the bitmap to the render target; you can use transformation matrices to achieve stretching.
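Putting those last two steps together, a sketch (assuming d2dBitmap, renderTarget, the converted pixels buffer, and its pitch already exist; the stretch here uses a destination rectangle, which has the same effect as a scale transform):

// Upload the converted frame into the existing D2D bitmap.
D2D1_RECT_U dstRect = D2D1::RectU(0, 0, width, height);
d2dBitmap->CopyFromMemory(&dstRect, pixels, pitch);

// Draw it stretched to the window's client size.
renderTarget->BeginDraw();
renderTarget->DrawBitmap(
    d2dBitmap,
    D2D1::RectF(0.0f, 0.0f, (FLOAT)windowWidth, (FLOAT)windowHeight),
    1.0f,                                    // opacity
    D2D1_BITMAP_INTERPOLATION_MODE_LINEAR);  // bilinear filtering for the stretch
renderTarget->EndDraw();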
I like to compress PNG images via the tinypng service. It saves up to 97% of a PNG's size. But sometimes the resulting picture looks brighter than the original, and that's bad. The question is: why does my image become brighter, and how do I avoid this effect?
On the tinypng website they write:
Because the number of colors is reduced, 24-bit PNG files can be converted to much smaller 8-bit indexed color images. All unnecessary metadata is stripped too.
Because tinypng uses lossy compression, it can alter image quality, including brightness. If you want no effect on image quality, you should look at using lossless compression, which only strips out unnecessary metadata and won't affect image quality. You could try:
https://kraken.io/web-interface/
http://www.punypng.com
The recompressed image is brighter because tinypng removes ancillary chunks. I verified that fact by sending it a PNG containing a "gAMA 1.0" chunk.
If the input image has a gAMA chunk, tinypng removes it and the image is displayed as though it were sRGB (gamma=1/2.2).
If the input image has no colorspace chunks (gAMA, sRGB, cHRM, or iCCP), or if it has those but they contain a colorspace that is exactly sRGB or close to sRGB, removing them is pretty safe and won't change the image brightness.
You can avoid the effect by using another application that doesn't remove ancillary chunks, or you can convert your images to the sRGB colorspace before sending them to tinypng.
Or, you could use a PNG editor to restore the gAMA chunk. There are many PNG editors available. Personally, I'd use pngsplit to extract the gAMA chunk from the original and to separate the chunks in the tiny PNG, then "cat" the chunks from the compressed file together with the old gAMA chunk (put it right after the IHDR chunk) to form a new compressed file with the right gAMA.
I'm building one part of an H.264 encoder. For testing the system, I need to create input images for encoding. We have a program that reads an image into RAM in the file format we use.
My question is how to create a raw file: bitmap or TIFF (I don't want to use a compressed format like JPEG)? I googled and found a lot of raw file types. So which type should I use, and how do I create it? I think I will use C/C++ or MATLAB to create the raw file.
P.S.: the format I need is YUV (or Y'CbCr) 4:2:0 with 8-bit colour depth.
The easiest raw format is just a stream of numbers representing the pixels. Each raw format can be associated with metadata such as:
width, height
bytes per image row / stride (e.g. gstreamer and X Window align each row to dword boundaries)
bits per pixel
byte format / endianness (if 16 bits per pixel or more)
number of image channels
color system HSV, RGB, Bayer, YUV
order of channels, e.g. RGBA, ABGR, GBR
planar vs. packed (or FOURCC code)
or this metadata can be just an internal specification...
I believe one of the easiest approaches (after, of course, a steep learning curve :) is to use e.g. gstreamer, where you can use existing file/stream sources that read data from a camera, a file, a pre-existing JPEG, etc., and pass those raw streams inside a defined pipeline. One useful element is a filesink, which would simply write a single raw data frame, or a few successive ones, to your filesystem. The gstreamer infrastructure has possibly hundreds of converters and filters, btw, including an H.264 encoder...
I would bet that if you just dump your memory, the output will already conform to some FOURCC format (also recognized by gstreamer).
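For completeness, a minimal sketch of writing one raw YUV 4:2:0 planar (I420) frame in C, matching the format the question asks for; the file is just the three planes concatenated, so there is no header at all:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    const int w = 352, h = 288;         // CIF, a common test size
    unsigned char *y = malloc(w * h);
    unsigned char *u = malloc((w / 2) * (h / 2));
    unsigned char *v = malloc((w / 2) * (h / 2));

    // Simple test pattern: horizontal luma gradient, neutral chroma.
    for (int row = 0; row < h; row++)
        for (int col = 0; col < w; col++)
            y[row * w + col] = (unsigned char)(col * 255 / (w - 1));
    memset(u, 128, (w / 2) * (h / 2));  // 128 = zero chroma offset
    memset(v, 128, (w / 2) * (h / 2));

    // Y plane (w*h bytes), then the U and V planes (w/2 * h/2 bytes each).
    FILE *f = fopen("frame.yuv", "wb");
    fwrite(y, 1, w * h, f);
    fwrite(u, 1, (w / 2) * (h / 2), f);
    fwrite(v, 1, (w / 2) * (h / 2), f);
    fclose(f);

    free(y); free(u); free(v);
    return 0;
}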
I'm trying to encode geometric data in an image file to decode in-browser using Canvas. Beyond what I learned from reading about the GIF, PNG and BMP formats today, I don't know much about image files (or binary files in general! I grok binary math conversions, but I've never had to interrogate or write binary data without something abstracting it for me).
This Mozilla tutorial (https://developer.mozilla.org/En/HTML/Canvas/Pixel_manipulation_with_canvas) indicates that Canvas reads the image as an array of 8-bit values, every four representing RGBA.
This leads me to believe I want to encode my data as an array of 8-bit values, and put it between an image header and an image footer.
What's the simplest way to do this?