ID3D11DeviceContext::CopyResource does not seem to copy the pixels - directx-11

I am trying to copy the pixels from a source ID3D11Texture2D into a shared destination texture. My goal is to create a shared handle and export it to another application. For the copying step I use ID3D11DeviceContext::CopyResource, but it doesn't seem to work properly. When I open the shared handle (using ID3D11Device1::OpenSharedResource1 or OpenSharedResourceByName), the image is all black, although the dimensions are correct. Does anyone know how I can debug this? I believe both the source and destination textures were created with similar descriptions; the only differences are that the destination's CPUAccessFlags is 0 and its MiscFlags is D3D11_RESOURCE_MISC_SHARED_NTHANDLE | D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX.
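For illustration, the shared destination texture and its exported NT handle might be created roughly like this (names such as srcDesc, device, and the access flags are placeholders, not my exact code):
D3D11_TEXTURE2D_DESC sharedDesc = srcDesc;   // srcDesc: the source texture's description (assumed)
sharedDesc.CPUAccessFlags = 0;
sharedDesc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_NTHANDLE | D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX;

ComPtr<ID3D11Texture2D> dxSharedTexture;
HRESULT hr = device->CreateTexture2D(&sharedDesc, nullptr, &dxSharedTexture);

// Export an NT handle that the other application can open.
ComPtr<IDXGIResource1> dxgiResource;
hr = dxSharedTexture.As(&dxgiResource);
HANDLE sharedHandle = nullptr;
hr = dxgiResource->CreateSharedHandle(nullptr, DXGI_SHARED_RESOURCE_READ | DXGI_SHARED_RESOURCE_WRITE, nullptr, &sharedHandle);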
Many thanks,
Nicholas

OK, a colleague of mine figured out a solution. The keyed mutex should be acquired before and released after the CopyResource call, e.g.:
ComPtr<IDXGIKeyedMutex> mutexA;
hr = dxgiResource.As(&mutexA);
// GPU Copy the original
mutexA->AcquireSync(0, ...);
dxContext->CopyResource(dxSharedTexture.Get(), dxTexture.Get());
mutexA->ReleaseSync(1);
and when retrieving the texture, do the same thing.
ComPtr<IDXGIKeyedMutex> mutexB;
hr = pTexture2D.As(&mutexB);
hr = mutexB->AcquireSync(1, ...);
// Retrieve the image
// ....
mutexB->ReleaseSync(0);
Note that mutexA and mutexB use opposite keys: you acquire the first texture with key 0 and release it with key 1; for the second texture, it is the other way around.
Further reading: [link]
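For completeness, here's a rough sketch of how the consuming application gets pTexture2D in the first place, before the second lock; device1 and sharedHandle are placeholders for whatever that application uses:
ComPtr<ID3D11Texture2D> pTexture2D;
hr = device1->OpenSharedResource1(sharedHandle, IID_PPV_ARGS(&pTexture2D));
// then acquire the keyed mutex with key 1, as shown above, before reading the pixels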

Related

Creating a HBITMAP from an existing buffer

I have an existing buffer full of (DIB) bitmap data, i.e. width x height x 4 bytes (RGBA) in size. What I want to do is draw this bitmap to the screen, but looking at the CreateBitmap... / CreateDIB... functions, they don't appear to do what I'm looking for. I don't want to copy the memory in, I want to retain access to it, so I can continue to write to it in the next frame (without incurring a penalty for copying the data). Does such a method exist, or do I have to create a new bitmap and call SetDIBits on it?
If you want simple code, you can use a BITMAP structure and assign its bmBits to point to your actual image data (RGBA, 8 bits per channel).
Then you can use the GDI function
HBITMAP CreateBitmapIndirect(const BITMAP *pbm);
to create an HBITMAP for displaying the image on screen.
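A minimal sketch of that approach (pixels, width, and height stand in for your existing buffer and its dimensions):
BITMAP bm = {};
bm.bmType       = 0;
bm.bmWidth      = width;
bm.bmHeight     = height;
bm.bmWidthBytes = width * 4;   // 4 bytes per pixel, no row padding assumed
bm.bmPlanes     = 1;
bm.bmBitsPixel  = 32;
bm.bmBits       = pixels;      // your existing RGBA buffer

HBITMAP hBitmap = CreateBitmapIndirect(&bm);
// ... select it into a memory DC and BitBlt to the screen ...
DeleteObject(hBitmap);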
But actually I think the system still does the copying while creating the HBITMAP; that's why, after CreateBitmapIndirect returns, you can safely free your image data.
But at least you only have to create the buffer once and can reuse it as long as the size of the image doesn't change.
I use this method to display raw video from RED's Camera.
You can't write a DIB directly to a device context - you'll have to create a bitmap and copy the pixels in. Annoying, I know!
Looks like this question has a succinct way of doing that in the accepted answer.

Can images in a loaded SWF be smoothed?

I'm trying to load one SWF into another in AS3/Haxe. The loaded SWF contains some images, but only inside some Shape.graphics elements (like graphics.beginBitmapFill(); ...).
These images are not smoothed and look jagged.
Can these images be smoothed somehow at runtime?
Any hack is welcome! :)
Thanks in advance!
Tom
Update: Sorry, I forgot to mention that I'm loading several AS2 SWFs (AVM1) into one AS3 SWF (AVM2) with AVM2Loader, which hacks the loaded bytes and converts the AVM1 SWFs into AVM2 - it works very well. :)
So, in these SWFs I need to find the images/bitmaps, but I only found the Shapes whose graphics elements contain the 'images'. If I clear these graphics, all the images are gone, so I think the images are drawn by some graphics.beginBitmapFill(...) without smoothing. I want to reach them and switch smoothing on at runtime, if possible.
(Sorry if I wasn't clear enough the first time.)
Edit (Jan 23 '14): I found a solution. It is not fast and requires Flash Player 11.6. Every MovieClip's graphics property has a new readGraphicsData function, which returns all the graphics commands (a Vector.<IGraphicsData>) needed to draw the whole MC. If I iterate over these commands, change every bitmap fill command's smooth parameter to true, and redraw the MC, it comes out smoothed and nice.
That's it. Not fast, but working.
function onLoad(event):Void
{
    pic.forceSmoothing = true;
}
Smoothing is a property of bitmaps that's off by default.
var image = new Bitmap(bitmapData);
image.smoothing = true;
Typically, your bitmapData will be in loader.content.bitmapData when loading externally, but it's up to you where you've stored it.
Update:
If you want to smooth all images in a loaded SWF without any knowledge of that SWF's structure, you'll have to recursively dig through its hierarchy and, wherever an object is a Bitmap, turn on smoothing.
function recursivelySmooth(obj):void {
    for (var i:int = 0; i < obj.numChildren; i++) {
        var item:* = obj.getChildAt(i);
        if (item is Bitmap) {
            item.smoothing = true;
        } else if (item.hasOwnProperty("numChildren") == true) {
            recursivelySmooth(item);
        }
    }
}
This was written freehand, so you may have to double-check that everything is correct, but that's the basic idea. Just call recursivelySmooth() on your SWF, and it will dig through all objects that can have child elements and smooth them.

GL_INVALID_VALUE from glFramebufferTexture2DEXT only after delete/realloc texture

I have some C# (SharpGL-esque) code which abstracts OpenGL frame buffer handling away to simple "set this texture as a 'render target'" calls. When a texture is first set as a render target, I create an FBO with matching depth buffer for that size of texture; that FBO/depth-buffer combo will then be reused for all same-sized textures.
I have a curious error as follows.
Initially the app runs and renders fine. But if I increase my window size, this can cause some code to need to resize its 'render target' texture, which it does via glDeleteTextures() and glGenTextures() (then bind, glTexImage2D, and texparams so MIN_FILTER and MAG_FILTER are both GL_NEAREST). I've observed I tend to get the same name (ID) back when doing so (as GL reuses the just-freed name).
We then hit the following code (with apologies for the slightly bastardised GL-like syntax):
void SetRenderTarget(Texture texture)
{
    if (texture != null)
    {
        var size = (texture.Width << 16) | texture.Height;
        FrameBufferInfo info;
        if (!_mapSizeToFrameBufferInfo.TryGetValue(size, out info))
        {
            info = new FrameBufferInfo();
            info.Width = texture.Width;
            info.Height = texture.Height;

            GL.GenFramebuffersEXT(1, _buffer);
            info.FrameBuffer = _buffer[0];
            GL.BindFramebufferEXT(GL.FRAMEBUFFER_EXT, info.FrameBuffer);
            GL.FramebufferTexture2DEXT(GL.FRAMEBUFFER_EXT, GL.COLOR_ATTACHMENT0_EXT, GL.TEXTURE_2D, texture.InternalID, 0);

            GL.GenRenderbuffersEXT(1, _buffer);
            info.DepthBuffer = _buffer[0];
            GL.BindRenderbufferEXT(GL.RENDERBUFFER_EXT, info.DepthBuffer);
            GL.RenderbufferStorageEXT(GL.RENDERBUFFER_EXT, GL.DEPTH_COMPONENT16, texture.Width, texture.Height);
            GL.BindRenderbufferEXT(GL.RENDERBUFFER_EXT, 0);
            GL.FramebufferRenderbufferEXT(GL.FRAMEBUFFER_EXT, GL.DEPTH_ATTACHMENT_EXT, GL.RENDERBUFFER_EXT, info.DepthBuffer);

            _mapSizeToFrameBufferInfo.Add(size, info);
        }
        else
        {
            GL.BindFramebufferEXT(GL.FRAMEBUFFER_EXT, info.FrameBuffer);
            GL.FramebufferTexture2DEXT(GL.FRAMEBUFFER_EXT, GL.COLOR_ATTACHMENT0_EXT, GL.TEXTURE_2D, texture.InternalID, 0);
        }
        GL.CheckFrameBufferStatus(GL.FRAMEBUFFER_EXT);
    }
    else
    {
        GL.FramebufferTexture2DEXT(GL.FRAMEBUFFER_EXT, GL.COLOR_ATTACHMENT0_EXT, GL.TEXTURE_2D, 0, 0);
        GL.BindFramebufferEXT(GL.FRAMEBUFFER_EXT, 0);
    }
    ProjectStandardOrthographic();
}
After said window resize, GL returns a GL_INVALID_VALUE error from the glFramebufferTexture2DEXT() call (identified with glGetError() and gDEBugger). If I ignore this, glCheckFramebufferStatus() later fails with GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT. If I ignore that too, I see the expected "framebuffer too dubious to do anything" errors if I check for them, and a black screen if I don't.
I'm running on an NVidia GeForce GTX 550 Ti, Vista 64 (32 bit app), 306.97 drivers. I'm using GL 3.3 with the Core profile.
Workaround and curiosity: if, when reallocating textures, I call glGenTextures() before glDeleteTextures() - to avoid getting the same ID back - the problem goes away. I don't want to do this, as it's a stupid kluge and increases my chances of out-of-memory errors. I'm theorising it's because GL was/is using the texture in a recent FBO and has now decided that the texture ID is in use, or is no longer valid in some way, and so isn't acceptable? Maybe?
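In plain GL terms, the workaround amounts to something like this (a sketch; the names, sizes, and formats are illustrative rather than taken from my actual code):
GLuint newTex;
glGenTextures(1, &newTex);        // reserve a fresh name before freeing the old one...
glDeleteTextures(1, &oldTex);     // ...so the driver can't hand the just-freed name straight back

glBindTexture(GL_TEXTURE_2D, newTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, newWidth, newHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
oldTex = newTex;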
After the problem occurs, gDEBugger shows that both FBOs (the original one with the smaller depth buffer and the previous texture, and the new one with the larger combination) have the same texture ID attached.
I've tried detaching the texture from the framebuffer (via glFramebufferTexture2DEXT again) before deallocation, but to no avail (gDEBugger reflects the change but the problem still occurs). I've tried taking out the depth buffer entirely. I've tried checking the texture sizes via glGetTexLevelParameter() before I use it; the texture does indeed exist.
This sounds like a bug in NVIDIA's OpenGL implementation. Once you delete an object name, that object name becomes invalid, and thus should be a legitimate candidate for glGen* to return.
You should file a bug report, with a minimal case that reproduces the issue.
I don't want to do this as it's a stupid kluge and increases my chances of out of memory errors.
No, it doesn't. glGenTextures doesn't allocate storage for textures (which is where any real OOM errors might come from). It only creates the texture name. It's unfortunate that you have to use a workaround, but it's not any real concern.

FreeImage portable float map (PFM) RGB channel order

I'm currently using FreeImage to load PFMs into a program that otherwise uses IplImages (the old data type for OpenCV). Here's a sample of what I'm doing (ignore the part about img being an array of Mats, that's related to some other code).
FIBITMAP *src;
// Load a PFM file using freeimage
src = FreeImage_Load(FIF_PFM, "test0.pfm", 0);
Mat* img;
img = new Mat[3];
// Create a copy of the image in an OpenCV matrix (using .clone() copies the data)
img[1] = Mat(FreeImage_GetHeight(src), FreeImage_GetWidth(src), CV_32FC3, FreeImage_GetScanLine(src, 0)).clone();
// Flip the image vertically because OpenCV row ordering is the reverse of FreeImage's
flip(img[1], img[1], 0);
// Save a copy
imwrite("OpenCV_converted_image.jpg", img[1]);
What's strange is that if I use FreeImage to load JPEGs instead by changing FIF_PFM to FIF_JPEG and CV_32FC3 to CV_8U, this works fine, i.e. the copied picture comes out unchanged. This makes me think that OpenCV and FreeImage generally agree on the ordering of RGB channels, and that the problem is related to PFMs specifically and their being a non-standardized format.
The PFMs I'm loading were written with this code (under "Local Histogram Equalization"), which appears to write them in RGB order although I could be wrong about that. It just takes the data from a MATLAB 3D matrix of doubles and dumps it into a file using fwrite. Also, if I modify that code to write PPMs instead, then view them in IrfanView, they look correct.
So that leaves me thinking FreeImage is treating the file data as already being BGR-ordered on disk, which it is not and should not be.
Any thoughts? Is there an error in FreeImage's reading of PFMs, or is there something more subtle going on here? Thanks.
Well, I never really got this one sorted out; long story short, FreeImage and OpenCV agree on color channel order (BGR) when loading most image formats, but not when loading PFMs. I can only assume that the makers of FreeImage have therefore misinterpreted the admittedly not very solidified specs for PFMs. Since I was only using FreeImage to read/write PFMs, and it was proving quite complicated to get data back into a FreeImage structure after processing with OpenCV functions, I wrote my own PFM read/write code which turned out to be very simple.
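For reference, a minimal PFM writer along those lines might look something like this (an illustrative sketch, not the code referred to above; it assumes a CV_32FC3 cv::Mat in OpenCV's default BGR order and writes a little-endian, bottom-to-top RGB PFM):
#include <opencv2/core.hpp>
#include <cstdio>

bool WritePFM(const char* path, const cv::Mat& img)
{
    if (img.type() != CV_32FC3) return false;
    std::FILE* f = std::fopen(path, "wb");
    if (!f) return false;

    // Header: "PF" for colour, then dimensions, then a negative scale meaning little-endian data.
    std::fprintf(f, "PF\n%d %d\n-1.0\n", img.cols, img.rows);

    for (int y = img.rows - 1; y >= 0; --y)                      // PFM rows run bottom-to-top
    {
        const cv::Vec3f* row = img.ptr<cv::Vec3f>(y);
        for (int x = 0; x < img.cols; ++x)
        {
            float rgb[3] = { row[x][2], row[x][1], row[x][0] };  // swap BGR -> RGB
            std::fwrite(rgb, sizeof(float), 3, f);
        }
    }
    std::fclose(f);
    return true;
}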

Win32 CreatePatternBrush

MSDN displays the following for CreatePatternBrush:
"You can delete a pattern brush without affecting the associated bitmap by using the DeleteObject function. Therefore, you can then use this bitmap to create any number of pattern brushes."
My question is the opposite: if the HBRUSH is long-lived, can I delete the HBITMAP right after I create the brush? I.e., does the HBRUSH store its own copy of the HBITMAP?
In this case, I'd like the HBRUSH to have object scope while the HBITMAP would have method scope (the method that creates the HBRUSH).
The HBRUSH and HBITMAP are entirely independent. The handles can be deleted independently of each other, and, once created, no changes to either object will affect the other.
The brush does have its own copy of the bitmap. This is easily seen by deleting the bitmap after creating the brush and then using the brush (it works fine).
Using GetObject to fill a LOGBRUSH structure will return the original HBITMAP handle in the lbHatch member, though, and not the copy's handle, unfortunately. And using GetObject on the returned bitmap handle fails if the bitmap has been deleted.
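For example (hbrPattern standing in for the brush in question):
LOGBRUSH lb = {};
if (GetObject(hbrPattern, sizeof(lb), &lb) && lb.lbStyle == BS_PATTERN)
{
    HBITMAP hbmOriginal = (HBITMAP)lb.lbHatch;    // the original bitmap handle, not a copy
    BITMAP bm = {};
    if (GetObject(hbmOriginal, sizeof(bm), &bm))  // fails if that bitmap has been deleted
    {
        // bm.bmWidth and bm.bmHeight are the pattern dimensions
    }
}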
Does anyone have any idea how to get the original bitmap's dimensions from the brush in this case? I wish to create a copy of the pattern brush even though the original bitmap has been deleted. I can get a copy of the original bitmap simply by painting with the brush, but I don't know its size. I tried using SetBrushOrgEx(hdc, -1, -1), hoping the -1s would be reduced modulo the bitmap's dimensions when the brush is selected into a device context, so that I could read the values back with GetBrushOrgEx. That doesn't work.
I think the bitmap must outlive the brush: the brush just references the existing bitmap rather than copying it.
You could always try it and see what happens.
I doubt that the CreatePatternBrush() API copies the bitmap you give it, since an HBITMAP is:
a GDI handle, the maximum number of which is limited, and
potentially quite large.
Win32 and GDI tend to be conservative about creating internal copies of your data, if only because when most of their APIs were created (CreatePatternBrush() dates to Windows 95, and many functions are older still), memory and GDI handles were in much more limited supply than they are now. (For example, Windows 95 was required to run well on a system with only 4MB of RAM.)
