8bpp scaling with Direct2D

I want to scale a 1920x1080 image down to 1280x720. The source and destination buffers are both in main memory, each pixel is 8 bits, and the images are the Y component of a video frame.
For best performance, I want to avoid buffer copies. IWICBitmapScaler can do the scaling but is very slow. I'm trying Direct2D, but I can't get DrawBitmap to work with an A8 render target: I get hr = 0x88982F80 (WINCODEC_ERR_UNSUPPORTEDPIXELFORMAT, "The bitmap pixel format is unsupported"). How can I use DrawBitmap with an A8 render target?
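For reference, a minimal sketch of the setup in question, with the A8 formats spelled out. pRT, pSrc, and the sizes are illustrative assumptions, and whether DrawBitmap accepts this combination is exactly what the failing HRESULT above is about:

// Assumed: pRT is an ID2D1RenderTarget whose pixel format is
// DXGI_FORMAT_A8_UNORM / D2D1_ALPHA_MODE_PREMULTIPLIED, and pSrc
// points to the 1920x1080 8bpp Y plane (pitch = 1920).
D2D1_BITMAP_PROPERTIES props = D2D1::BitmapProperties(
    D2D1::PixelFormat(DXGI_FORMAT_A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));

ID2D1Bitmap *bitmap = NULL;
HRESULT hr = pRT->CreateBitmap(D2D1::SizeU(1920, 1080), pSrc, 1920,
                               &props, &bitmap);
if (SUCCEEDED(hr)) {
    pRT->BeginDraw();
    // DrawBitmap scales when the destination rect has a different size.
    pRT->DrawBitmap(bitmap, D2D1::RectF(0, 0, 1280, 720), 1.0f,
                    D2D1_BITMAP_INTERPOLATION_MODE_LINEAR);
    hr = pRT->EndDraw();
    bitmap->Release();
}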

Related

How to improve the quality of images in movie saved via videowriter in matlab?

I am making a movie in MATLAB with 3 subplots. When I save a figure as a PNG it looks great, but the frames saved in video mode look ugly. How can I improve the video images, in particular removing the extra white-space regions? Also, is there any possibility to see a 360° rotation in the elevation angle? I am using a Linux machine.
Here is the image as a PNG:
The same image saved in video mode:
MATLAB code:
v = VideoWriter('myFile.avi');
v.FrameRate = 1;
open(v);
h1 = figure;
set(gcf, 'PaperSize', [15 5]);
set(gcf, 'PaperPosition', [0 0 15 5]);

d = linspace(0, 360, 15);            % azimuth angles
first = length(d);
for j = 1:first
    viewScene(j,:) = [-d(j), 1];
end
d = linspace(-90, 90, 10);           % elevation angles
for j = 1:length(d)
    viewScene(j+first,:) = [-38, d(j)];
end
final = first + length(d);

for t = 1:final
    for i = 1:3
        subplot(1,3,i)
        plot3()                      % plotting calls elided by the poster
        hold on
        plot3()
        axis vis3d;
        view(viewScene(t,:))
        set(findobj(gcf,'type','axes'),'visible','off');
    end
    frame = getframe(gcf);
    writeVideo(v, frame)
    hold off
end
close(v);
By default, VideoWriter creates a Motion JPEG AVI file. You can raise the Quality property to get better-quality frames; Quality = 100 gives the best results (see the sketch after this answer).
Another option is to write an Uncompressed AVI, which stores the frames as-is with no loss but can result in much larger files. You can also convert your image into an indexed image using the rgb2ind function and write an Indexed AVI, which is also lossless and gives roughly one third the size of an Uncompressed AVI.
You can also give MPEG-4 a shot.
Keep in mind that MJPEG and MPEG-4 are designed to compress natural scenic images, where losing high-frequency information (i.e. sharp edges and transitions) is acceptable. Your image is not a natural scene, so this result is expected.
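A minimal MATLAB sketch of these options; the file names are illustrative, and Quality applies to the Motion JPEG AVI and MPEG-4 profiles:

% Motion JPEG AVI at maximum quality (the default Quality is 75)
v = VideoWriter('myFile.avi', 'Motion JPEG AVI');
v.Quality = 100;

% Lossless but large: uncompressed frames
vu = VideoWriter('myFileRaw.avi', 'Uncompressed AVI');

% Lossless Indexed AVI: convert each frame with rgb2ind first
[X, map] = rgb2ind(frame.cdata, 256);   % frame from getframe(gcf)
vi = VideoWriter('myFileIdx.avi', 'Indexed AVI');
vi.Colormap = map;                      % then writeVideo(vi, X)

% MPEG-4 is also worth a try
vm = VideoWriter('myFile.mp4', 'MPEG-4');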

FFMPEG Frame to DirectX Surface Hardware Accelerated

I use the ffmpeg libraries to decode H.264 frames and display them in a window on the Windows platform. The approach I use is as below (from FFMPEG Frame to DirectX Surface):
AVFrame *frame;
avcodec_decode_video(_ffcontext, frame, etc...);

lockYourSurface();
uint8_t *buf = getPointerToYourSurfacePixels();

// Create an AVPicture structure which points at the RGB surface.
AVPicture pict;
memset(&pict, 0, sizeof(pict));
avpicture_fill(&pict, buf, PIX_FMT_RGB32,
               _ffcontext->width, _ffcontext->height);

// Convert the image to RGB and copy it to the surface.
img_convert(&pict, PIX_FMT_RGB32, (AVPicture *)frame,
            _ffcontext->pix_fmt, _ffcontext->width, _ffcontext->height);

unlockYourSurface();
In my code, I use sws_scale instead of the deprecated img_convert.
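For reference, a hedged sketch of that sws_scale variant, reusing buf, frame, and _ffcontext from the snippet above and assuming a reasonably modern FFmpeg:

struct SwsContext *sws = sws_getContext(
    _ffcontext->width, _ffcontext->height, _ffcontext->pix_fmt,
    _ffcontext->width, _ffcontext->height, AV_PIX_FMT_RGB32,
    SWS_BILINEAR, NULL, NULL, NULL);

// Wrap the surface pointer as the destination "planes".
uint8_t *dstData[4]  = { buf, NULL, NULL, NULL };
int      dstStride[4] = { _ffcontext->width * 4, 0, 0, 0 };

sws_scale(sws, frame->data, frame->linesize, 0,
          _ffcontext->height, dstData, dstStride);
sws_freeContext(sws);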
When I pass the surface data pointer to sws_scale (in fact, in avpicture_fill), the pointer is actually in RAM, not in GPU memory, and when I want to display the surface the data is apparently moved to the GPU and then displayed. As far as I know, CPU utilization is high when data is copied between RAM and GPU memory.
How can I tell ffmpeg to render directly to a surface in GPU memory (rather than to a data pointer in RAM)?
I have found the answer to this problem. To avoid the extra CPU usage when displaying frames with ffmpeg, we must not convert the decoded frame to RGB. Almost all video files are decoded to YUV (the native image format inside the video file). The point here is that the GPU can display YUV data directly, without any need to convert it to RGB. As far as I know, with the usual ffmpeg builds the decoded data always lives in RAM. For a frame, the amount of YUV data is small compared with the RGB equivalent of the same frame (for 4:2:0 YUV, 1.5 bytes per pixel versus 4 bytes for RGB32). So by moving YUV data to the GPU instead of converting to RGB and then moving that, we speed up the operation in two ways:
No CPU-side conversion to RGB
The amount of data moved between RAM and GPU is reduced
So the overall CPU usage decreases. A sketch of the upload step follows.
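For illustration, a hedged sketch of that upload step, assuming a Direct3D 9 offscreen plain surface created with the YV12 FOURCC format and a decoded AVFrame *frame of size w x h; the device and surface setup are assumptions, and error handling is elided:

// Assumed: surface came from CreateOffscreenPlainSurface(w, h,
//   (D3DFORMAT)MAKEFOURCC('Y','V','1','2'), D3DPOOL_DEFAULT, &surface, NULL).
D3DLOCKED_RECT lr;
if (SUCCEEDED(surface->LockRect(&lr, NULL, 0))) {
    uint8_t *dst = (uint8_t *)lr.pBits;
    // Y plane at full resolution; copy row by row to respect the pitch.
    for (int y = 0; y < h; ++y)
        memcpy(dst + y * lr.Pitch, frame->data[0] + y * frame->linesize[0], w);
    dst += lr.Pitch * h;
    // YV12 stores V before U, each at half resolution and half pitch.
    for (int y = 0; y < h / 2; ++y)
        memcpy(dst + y * (lr.Pitch / 2), frame->data[2] + y * frame->linesize[2], w / 2);
    dst += (lr.Pitch / 2) * (h / 2);
    for (int y = 0; y < h / 2; ++y)
        memcpy(dst + y * (lr.Pitch / 2), frame->data[1] + y * frame->linesize[1], w / 2);
    surface->UnlockRect();
}
// A StretchRect from this surface to the back buffer then performs
// the YUV-to-RGB conversion (and any scaling) on the GPU.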

What is the most effective pixel format for WIC bitmap processing?

I'm trying to make a simple video-player-like program with Direct2D and WIC bitmaps.
It requires fast, CPU-economical drawing (with stretching) of YUV-format frame data.
I've already tested GDI. I hope switching to Direct2D gives at least a 10x performance gain (smaller CPU overhead).
What I'll be doing is basically as follows:
1. Create an empty WIC bitmap A (the drawing canvas)
2. Create another WIC bitmap B with the YUV frame data (format conversion)
3. Draw bitmap B onto A, then draw A to the D2D render target
For steps 1 and 2, I must select a pixel format.
There is an MSDN page, WIC Native Pixel Formats, that recommends WICPixelFormat32bppPBGRA:
http://msdn.microsoft.com/en-us/library/windows/desktop/hh780393(v=vs.85).aspx
What's the difference between WICPixelFormat32bppPBGRA and WICPixelFormat32bppBGRA? (The former has an additional P.)
If WICPixelFormat32bppPBGRA is the way to go, is that always the case, regardless of hardware and/or configuration?
What actually is the most effective pixel format for WIC bitmap processing?
Unfortunately, with Direct2D 1.1 or lower you cannot use a pixel format other than DXGI_FORMAT_B8G8R8A8_UNORM, which is equivalent to WIC's WICPixelFormat32bppPBGRA (the "P" applies if you use the D2D1_ALPHA_MODE_PREMULTIPLIED alpha mode in D2D).
If your target OS is Windows 8, you can use Direct2D's newer features. As far as I remember there is some kind of YUV support for D2D bitmaps. (Edit: no, there is not. RGB32 remains the only pixel format supported, along with some alpha-only formats.)
What actually is the most effective pixel format for WIC bitmap processing?
I'm not sure how to measure pixel-format effectiveness, but if you want hardware acceleration you should draw using D2D instead of WIC, and use WIC only for colorspace conversion. (GDI is also hardware accelerated, by the way.)
What's the difference between WICPixelFormat32bppPBGRA and WICPixelFormat32bppBGRA? (The former has an additional P.)
P means that the RGB components are premultiplied by alpha (see here).
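To make that concrete, a small illustrative helper (not a WIC API) showing what premultiplication does to one 32bpp BGRA pixel:

// Straight BGRA -> premultiplied BGRA (PBGRA): each color channel is
// scaled by alpha, so alpha blending reduces to dst = src + (1 - a) * dst.
static inline uint32_t PremultiplyBGRA(uint32_t bgra)
{
    uint32_t a = (bgra >> 24) & 0xFF;
    uint32_t r = ((bgra >> 16) & 0xFF) * a / 255;
    uint32_t g = ((bgra >> 8)  & 0xFF) * a / 255;
    uint32_t b = ( bgra        & 0xFF) * a / 255;
    return (a << 24) | (r << 16) | (g << 8) | b;
}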
What I'll be doing is basically as follows:
1. Create an empty WIC bitmap A (the drawing canvas)
2. Create another WIC bitmap B with the YUV frame data (format conversion)
3. Draw bitmap B onto A, then draw A to the D2D render target
If you are after performance, you should minimize bitmap copy operations, and you should avoid the WIC bitmap render target, because it uses software rendering. If your player will only render to a window, you can use an HWND render target, or a DeviceContext with a swap chain (depending on the Direct2D version you use).
Instead of rendering bitmap B onto A, you can use the software pixel-format conversion features of WIC (e.g. IWICFormatConverter, sketched below). Another way would be to write (or find) a custom conversion routine using SIMD operations, or to convert the format (colorspace) on the GPU side with shaders, but the latter two require advanced knowledge.
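A hedged sketch of the IWICFormatConverter route, assuming an IWICImagingFactory *factory and a source IWICBitmapSource *source already exist (error handling elided):

IWICFormatConverter *converter = NULL;
factory->CreateFormatConverter(&converter);
converter->Initialize(
    source,                          // the bitmap to convert
    GUID_WICPixelFormat32bppPBGRA,   // the format D2D wants
    WICBitmapDitherTypeNone,
    NULL,                            // no palette
    0.0,                             // alpha threshold percent
    WICBitmapPaletteTypeCustom);
// The converter is itself an IWICBitmapSource: CopyPixels() pulls the
// converted pixels into a caller-provided buffer.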
Once it is converted, you can lock the pixels to get at the pixel data and copy that data directly into a D2D bitmap (ID2D1Bitmap::CopyFromMemory()), given that you already have a D2D bitmap of matching size and format.
The last step is to render the bitmap to the render target; you can use transformation matrices to realize the stretching, as in the sketch below.
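A hedged sketch of those last two steps, assuming pixels and pitch hold the converted frame, d2dBitmap is a matching ID2D1Bitmap, and rt is the render target (the sizes are illustrative):

// Copy the converted pixels into the existing D2D bitmap.
d2dBitmap->CopyFromMemory(NULL, pixels, pitch);

rt->BeginDraw();
// Stretch, e.g., a 1920x1080 frame into a 1280x720 window.
rt->SetTransform(D2D1::Matrix3x2F::Scale(1280.0f / 1920.0f,
                                         720.0f / 1080.0f));
rt->DrawBitmap(d2dBitmap);
rt->SetTransform(D2D1::Matrix3x2F::Identity());
rt->EndDraw();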

Image resizing without quality loss?

If I have, for example, an image of size 400 x 600, I know how to resize it to 80 x 80 using the code below:
original_image = imread(my_image);
original_image_gray = rgb2gray(original_image);
Image_resized = imresize(original_image_gray, [80 80]);
But I think that imresize will resize the image with some loss of quality. So how can I resize it without any loss of quality?
Image resizing itself discards part of the image information, i.e. quality of the image.
What you can do is choose the resizing method that fits your purpose by setting the corresponding method argument:
[...] = imresize(..., method)
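For example (a small sketch; 'nearest', 'bilinear', 'bicubic', 'lanczos2', and 'lanczos3' are the documented kernels):

% The default is 'bicubic'; a larger kernel such as 'lanczos3' often
% preserves detail better when downsampling.
resized_lanczos = imresize(original_image_gray, [80 80], 'lanczos3');
resized_nearest = imresize(original_image_gray, [80 80], 'nearest');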
MATLAB stores images as pixel arrays. It is impossible to store all the information contained in a 400x600-element matrix in an 80x80 matrix, therefore quality loss is unavoidable when resizing the pixel array, which is what imresize does.
If you instead want to reduce the physical size of your output, you should look at the imwrite documentation, in particular at the XResolution and YResolution parameters in the case of creating PNG images.
original_image = imread(my_image);
original_image_gray = rgb2gray(original_image);   % grayscale, as above
imwrite(original_image_gray,'image.png','png','ResolutionUnit','cm','XResolution',400)
The above code creates a PNG of the original image with a resolution of 400 px/cm, resulting in an image 1 cm wide. The PNG will still be a 400x600 px bitmap.

OpenGL ES: will changing the texture format from RGBA8888 to RGBA4444 improve fill rate?

I'm rendering a lot of big images with alpha testing, and I'm hitting the fill-rate limit.
If I change the texture format from RGBA8888 to RGBA4444, will the fill rate improve?
EDIT: The hardware is an iPhone 3GS with OpenGL ES 1.1.
Doing some tests, I've found the following:
Loading the PNG as RGBA8888 I get 42 fps
Loading the PNG as RGBA4444 I get 44 fps
Loading PVRTC2 I get 53 fps (and I had to double the texture size because it was not square)
It seems that changing from RGBA8888 to RGBA4444 does not improve the frame rate, but using PVRTC2 might.
You don't specify the particular hardware you're asking about, but Apple has this to say about the PowerVR GPUs in iOS devices:
"If your application cannot use compressed textures, consider using a lower precision pixel format. A texture in RGB565, RGBA5551, or RGBA4444 format uses half the memory of a texture in RGBA8888 format. Use RGBA8888 only when your application needs that level of quality."
While this will improve memory usage, I'm not sure it will have a dramatic effect on fill rate. You might see an improvement from being able to hold more of the texture in the cache at once, but I'd listen to Tommy's point about per-pixel operations being the more likely bottleneck here.
Also, when it comes to texture size, you'll get much better image quality at a smaller size by using texture compression (like PVRTC on the PowerVR GPUs) than by lowering the precision of the texture pixel format. A sketch of the PVRTC upload path follows.
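For illustration, a hedged sketch of uploading a 4 bpp PVRTC texture on OpenGL ES 1.1 via the GL_IMG_texture_compression_pvrtc extension; width, height, and data are assumed to come from a parsed PVR file header:

// PVRTC requires square, power-of-two textures. At 4 bpp each 4x4
// block occupies 8 bytes, with a minimum of 2x2 blocks per image.
GLsizei w = width  < 8 ? 8 : width;
GLsizei h = height < 8 ? 8 : height;
GLsizei imageSize = w * h / 2;   // 4 bits per pixel

glBindTexture(GL_TEXTURE_2D, texId);
glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                       GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG,
                       width, height, 0, imageSize, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);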
