I take a DX11 frame, convert it to a cv::Mat, and put it on an eCAL message buffer. Profiling shows memory ballooning at CreateTexture2D.
I've tried releasing the texture, the device, and the context. I've tried flushing the context. I put everything in a method, which I thought was supposed to let everything destruct. I've tried COM pointers, and maybe I'm using them wrong, but the effect is the same.
There's noise in this snippet just because I've been trying everything. This whole DX11 space sucks. What am I supposed to do to get these things to actually release? Is there a way to brute-force them to free the memory?
flatbuffers::FlatBufferBuilder builder(1024);
D3D11_TEXTURE2D_DESC desc;
surfaceTexture->GetDesc(&desc);
D3D11_BOX my_box;
cv::Mat mat;
ID3D11Texture2D* myText = nullptr; // raw pointer to the staging texture - this is what has to be released
my_box.front = 0;
my_box.back = 1;
my_box.left = 1600;
my_box.top = 480;
my_box.right = 2240;
my_box.bottom = 1120; // 480 (top) + 640 rows; D3D11_BOX fields are UINTs, so a negative bottom wraps to a huge value
desc.Width = 640;
desc.Height = 640;
desc.ArraySize = 1;
desc.BindFlags = 0;
desc.MiscFlags = 0;
desc.SampleDesc.Count = 1;
desc.SampleDesc.Quality = 0;
desc.MipLevels = 1;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.Usage = D3D11_USAGE_STAGING;
auto d3dDevice2 = GetDXGIInterfaceFromObject<ID3D11Device>(m_device2);
d3dDevice2->GetImmediateContext(m_d3dContext2.put());
d3dDevice2->CreateTexture2D(&desc, NULL, &myText);
m_d3dContext2->CopySubresourceRegion(myText, D3D11CalcSubresource(0, 0, 1), 0, 0, 0, surfaceTexture.get(), 0, &my_box);
cv::directx::convertFromD3D11Texture2D(myText, mat); // this has to run, or mat below stays empty
m_d3dContext2->Flush(); // flushing is harmless, but it does not free the staging texture by itself
// The staging texture is the actual leak: release it once the data is in mat.
myText->Release();
myText = nullptr;
// Do not call Release() by hand on d3dDevice2 / m_d3dContext2 - the com_ptr owns
// those references, and dropping them again corrupts the reference count.
cv::cvtColor(mat, mat, cv::COLOR_RGBA2RGB);
// sizeof(mat.data) is the size of a pointer, not of the image; use the real buffer size
auto byte_buffer = builder.CreateVector(mat.data, mat.total() * mat.elemSize());
auto mloc = Dx11::Frame::CreateFrame(builder, byte_buffer);
builder.Finish(mloc);
pub.Send(builder, -1);
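For what it's worth, here is a minimal sketch of the same capture path with the staging texture owned by a winrt::com_ptr, so its reference is dropped deterministically every frame. The names (surfaceTexture, m_device2, GetDXGIInterfaceFromObject) are taken from the snippet above, so treat this as a sketch under those assumptions, not a drop-in fix. ClearState() followed by Flush() is also the closest thing D3D11 offers to brute-forcing the deferred-destruction queue to drain:
winrt::com_ptr<ID3D11Texture2D> staging;
auto device = GetDXGIInterfaceFromObject<ID3D11Device>(m_device2);
winrt::com_ptr<ID3D11DeviceContext> context;
device->GetImmediateContext(context.put());
D3D11_TEXTURE2D_DESC desc;
surfaceTexture->GetDesc(&desc); // keep the source Format
desc.Width = 640;
desc.Height = 640;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.SampleDesc = { 1, 0 };
desc.Usage = D3D11_USAGE_STAGING;
desc.BindFlags = 0;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.MiscFlags = 0;
winrt::check_hresult(device->CreateTexture2D(&desc, nullptr, staging.put()));
D3D11_BOX box{ 1600, 480, 0, 2240, 1120, 1 }; // left, top, front, right, bottom, back
context->CopySubresourceRegion(staging.get(), 0, 0, 0, 0, surfaceTexture.get(), 0, &box);
cv::Mat mat;
cv::directx::convertFromD3D11Texture2D(staging.get(), mat);
// If memory still climbs, force the deferred-destruction queue to drain:
context->ClearState();
context->Flush();
// staging and context drop their references here, when the com_ptrs go out of scope
If the crop size never changes, it is also worth creating the staging texture once and reusing it every frame instead of allocating a new one per frame.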
Related
I am trying to display a 24-bit uncompressed bitmap with an odd width using standard Win32 API calls, but it seems like I have a stride problem.
According to MSDN:
https://msdn.microsoft.com/en-us/library/windows/desktop/dd318229%28v=vs.85%29.aspx
"For uncompressed RGB formats, the minimum stride is always the image width in bytes, rounded up to the nearest DWORD. You can use the following formula to calculate the stride:
stride = ((((biWidth * biBitCount) + 31) & ~31) >> 3)"
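(As a sanity check on that formula: a 7-pixel-wide 24-bit image gives stride = ((7 * 24 + 31) & ~31) >> 3 = 192 >> 3 = 24 bytes per row, compared to 7 * 3 = 21 unpadded bytes.)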
but this simply does not work for me, and below is the code:
void Init()
{
pImage = ReadBMP("data\\bird.bmp");
size_t imgSize = pImage->width * pImage->height * 3;
BITMAPINFOHEADER bmih;
bmih.biSize = sizeof(BITMAPINFOHEADER);
bmih.biBitCount = 24;
// This is probably where the bug is
LONG stride = ((((pImage->width * bmih.biBitCount) + 31) & ~31) >> 3);
//bmih.biWidth = pImage->width;
bmih.biWidth = stride;
bmih.biHeight = -((LONG)pImage->height);
bmih.biPlanes = 1;
bmih.biCompression = BI_RGB;
bmih.biSizeImage = 0;
bmih.biXPelsPerMeter = 1;
bmih.biYPelsPerMeter = 1;
bmih.biClrUsed = 0;
bmih.biClrImportant = 0;
BITMAPINFO dbmi;
ZeroMemory(&dbmi, sizeof(dbmi));
dbmi.bmiHeader = bmih;
dbmi.bmiColors->rgbBlue = 0;
dbmi.bmiColors->rgbGreen = 0;
dbmi.bmiColors->rgbRed = 0;
dbmi.bmiColors->rgbReserved = 0;
HDC hdc = ::GetDC(NULL);
mTestBMP = CreateDIBitmap(hdc,
&bmih,
CBM_INIT,
pImage->pSrc,
&dbmi,
DIB_RGB_COLORS);
::ReleaseDC(NULL, hdc); // release the screen DC rather than leaking a second one
}
and here is the drawing function:
RawBMP *pImage;
HBITMAP mTestBMP;
void UpdateScreen(HDC srcHDC)
{
if (pImage != nullptr && mTestBMP != 0x00)
{
HDC hdc = CreateCompatibleDC(srcHDC);
HGDIOBJ oldBmp = SelectObject(hdc, mTestBMP);
BitBlt(srcHDC,
0, // x
0, // y
// I tried passing the stride here and it did not work either
pImage->width, // width of the image
pImage->height, // height
hdc,
0, // x and
0, // y of upper left corner
SRCCOPY);
SelectObject(hdc, oldBmp); // restore the original bitmap before deleting the DC
DeleteDC(hdc);
}
}
If I pass the original image width (odd number) instead of the stride
LONG stride = ((((pImage->width * bmih.biBitCount) + 31) & ~31) >> 3);
//bmih.biWidth = stride;
bmih.biWidth = pImage->width;
the picture looks skewed, and if I pass the stride according to MSDN, then nothing shows up because the stride is too large.
Any clues? Thank you!
Thanks Jonathan for the solution. I need to copy row by row with the proper padding for odd-width images. More or less the code for 24-bit uncompressed images:
const uint32_t bitCount = 24;
// The DWORD-rounding formula is correct for every width, so no odd/even branch
// is needed (an even width such as 2 also needs padding: 2 * 3 = 6 raw bytes,
// but the row must occupy 8).
LONG strideInBytes = ((((width * bitCount) + 31) & ~31) >> 3);
// allocate the new buffer
unsigned char *pBuffer = new unsigned char[strideInBytes * height];
memset(pBuffer, 0xaa, strideInBytes * height);
// Copy row by row
for (uint32_t yy = 0; yy < height; yy++)
{
uint32_t rowSizeInBytes = width * 3;
unsigned char *pDest = &pBuffer[yy * strideInBytes];
unsigned char *pSrc = &pData[yy * rowSizeInBytes];
memcpy(pDest, pSrc, rowSizeInBytes);
}
rawBMP->pSrc = pBuffer;
rawBMP->width = width;
rawBMP->height = height;
rawBMP->stride = strideInBytes;
How can I create texture mipmaps in DirectX? This is my code, in which I tried to do this, but it doesn't work:
D3D11_TEXTURE2D_DESC desc{}; // zero-init leaves desc.MipLevels = 0, i.e. allocate a full mip chain
desc.Width = dims.X;
desc.Height = dims.Y;
desc.ArraySize = 1;
desc.SampleDesc.Count = 1;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
desc.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS;
D3D11_SUBRESOURCE_DATA initData{};
initData.pSysMem = pixels;
initData.SysMemPitch = sizeof(unsigned char) * dims.X * 4;
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc{};
srvDesc.Format = desc.Format;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = 1;
Device->CreateTexture2D(&desc, nullptr, Texture.GetAddressOf());
Device->CreateShaderResourceView(Texture.Get(), &srvDesc, ShaderResource.GetAddressOf());
DeviceContext->UpdateSubresource(Texture.Get(), 0, 0, pixels, initData.SysMemPitch, 0);
DeviceContext->GenerateMips(ShaderResource.Get());
Texture now looks like this: [screenshot missing from the original post]
OK, I changed srvDesc.Texture2D.MipLevels to -1 and now it works. Thanks. (MipLevels is a UINT, so -1 means "all mip levels"; with MipLevels = 1 the view only covered mip 0, so GenerateMips had nothing to write the lower levels into.)
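For anyone else hitting this, these are the two relevant settings (a minimal sketch using the field names from the code above):
desc.MipLevels = 0;                     // 0 = allocate the full mip chain
srvDesc.Texture2D.MostDetailedMip = 0;
srvDesc.Texture2D.MipLevels = UINT(-1); // -1 = view every level, so GenerateMips can fill them all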
I have currently the problem that a library creates a DX11 texture with BGRA pixel format.
But the displaying library can only display RGBA correctly. (This means the colors are swapped in the rendered image)
After looking around I found a simple for-loop that solves the problem, but the performance is not very good and scales badly with higher resolutions. I'm new to DirectX, and maybe I just missed a simple function to do the conversion.
// Get the image data
unsigned char* pDest = view->image->getPixels();
// Prepare source texture
ID3D11Texture2D* pTexture = static_cast<ID3D11Texture2D*>( tex );
// Get context
ID3D11DeviceContext* pContext = NULL;
dxDevice11->GetImmediateContext(&pContext);
// Copy data, fast operation
pContext->CopySubresourceRegion(texStaging, 0, 0, 0, 0, tex, 0, nullptr);
// Create mapping
D3D11_MAPPED_SUBRESOURCE mapped;
HRESULT hr = pContext->Map( texStaging, 0, D3D11_MAP_READ, 0, &mapped );
if ( FAILED( hr ) )
{
return;
}
// Calculate size
const size_t size = _width * _height * 4;
// Access pixel data
unsigned char* pSrc = static_cast<unsigned char*>( mapped.pData );
// Offsets
int offsetSrc = 0;
int offsetDst = 0;
int rowOffset = mapped.RowPitch - _width * 4; // padding bytes at the end of each mapped row
// Loop through it, BGRA to RGBA conversion
for (int row = 0; row < _height; ++row)
{
for (int col = 0; col < _width; ++col)
{
pDest[offsetDst] = pSrc[offsetSrc+2];
pDest[offsetDst+1] = pSrc[offsetSrc+1];
pDest[offsetDst+2] = pSrc[offsetSrc];
pDest[offsetDst+3] = pSrc[offsetSrc+3];
offsetSrc += 4;
offsetDst += 4;
}
// Skip the padding at the end of the row
offsetSrc += rowOffset;
}
// Unmap texture
pContext->Unmap( texStaging, 0 );
Solution:
Texture2D txDiffuse : register(t0);
SamplerState texSampler : register(s0);
struct VSScreenQuadOutput
{
float4 Position : SV_POSITION;
float2 TexCoords0 : TEXCOORD0;
};
float4 PSMain(VSScreenQuadOutput input) : SV_Target
{
return txDiffuse.Sample(texSampler, input.TexCoords0).bgra; // .rgba is an identity swizzle and swaps nothing; .bgra does the red/blue swap
}
Obviously, iterating over a texture on the CPU is not the most efficient way. If you know the colors in a texture are always swapped like that and you don't want to modify the texture itself in your C++ code, the most straightforward way is to do it in the pixel shader: when you sample the texture, simply swap the channels there (the .bgra swizzle above). You won't even notice a performance drop.
I am trying to create a Texture3D programmatically, but I don't really understand how it is done. Should each slice of the texture be a subresource? This is what I am trying to do, but it is not working:
// Create texture3d
const int32 cWidth = 6;
const int32 cHeight = 7;
const int32 cDepth = 3;
D3D11_TEXTURE3D_DESC desc;
desc.Width = cWidth;
desc.Height = cHeight;
desc.MipLevels = 1;
desc.Depth = cDepth;
desc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_RENDER_TARGET;
desc.CPUAccessFlags = 0;
desc.MiscFlags = 0;
const uint32 bytesPerPixel = 16; // DXGI_FORMAT_R32G32B32A32_FLOAT: 4 floats = 16 bytes
uint32 sliceSize = cWidth*cHeight*bytesPerPixel;
float tex3d[cWidth*cHeight*cDepth*4]; // 4 float components per texel
memset(tex3d, 0x00, sizeof(tex3d));
uint32 colorIndex = 0;
for (uint32 depthCount = 0; depthCount<cDepth; depthCount++)
{
for (uint32 ii=0; ii<cHeight; ii++)
{
for (uint32 jj=0; jj<cWidth; jj++)
{
// Add some dummy color
tex3d[colorIndex++] = 1.f;
tex3d[colorIndex++] = 0.f;
tex3d[colorIndex++] = 1.f;
tex3d[colorIndex++] = 0.f;
}
}
}
D3D11_SUBRESOURCE_DATA initData[cDepth] = {0};
uint8 *pMem = (uint8*)tex3d;
// What do I pass here? Each slice?
for (uint32 depthCount = 0; depthCount<cDepth; depthCount++)
{
initData[depthCount].pSysMem = static_cast<const void*>(pMem);
initData[depthCount].SysMemPitch = static_cast<UINT>(sliceSize); // not sure
initData[depthCount].SysMemSlicePitch = static_cast<UINT>(sliceSize); // not sure
pMem += sliceSize;
}
ID3D11Texture3D* tex = nullptr;
hr = m_d3dDevice->CreateTexture3D(&desc, &initData[0], &tex);
ID3D11RenderTargetView *pRTV = nullptr;
hr = m_d3dDevice->CreateRenderTargetView(tex, nullptr, &pRTV);
This creates the texture, but it gives me 1 subresource. Should it be 3?
I looked at this article, but it refers to Texture2D:
D3D11: Creating a cube map from 6 images
If anyone has a snippet of code that works, I'd like to take a look.
Thanks!
In Direct3D, 3D textures are laid out such that sub-resources are mipmap levels. Each mipmap level contains 1/2 as many slices as the previous, but in this case you only have 1 mipmap LOD, so you will only have 1 subresource (containing 3 slices).
As for the pitch, SysMemPitch is the number of bytes between rows in each image slice (cWidth * bytesPerPixel assuming you tightly pack this). SysMemSlicePitch is the number of bytes between 2D slices (cWidth * cHeight * bytesPerPixel). Thus, the memory for each mipmap needs to be arranged as a series of 2D images with the same dimensions.
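A minimal sketch of that layout, reusing the constants from the question and assuming the slice data is tightly packed:
// One D3D11_SUBRESOURCE_DATA per mip level, not per slice;
// all cDepth slices of mip 0 sit back to back in one allocation.
float tex3d[cWidth * cHeight * cDepth * 4]; // 4 floats per texel (R32G32B32A32_FLOAT)
// ... fill tex3d as in the question ...
D3D11_SUBRESOURCE_DATA initData = {};
initData.pSysMem = tex3d;
initData.SysMemPitch = cWidth * 16;                // bytes between rows (16 bytes per texel)
initData.SysMemSlicePitch = cWidth * cHeight * 16; // bytes between 2D slices
ID3D11Texture3D* tex = nullptr;
HRESULT hr = m_d3dDevice->CreateTexture3D(&desc, &initData, &tex);
With desc.MipLevels = 1 this yields exactly one subresource containing all three slices, which matches what you observed.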
I want to create an image out of a Core OpenGL context.
I used the following code, but it creates a black image. So I guess I cannot use glReadPixels there? Any other suggestions, please?
int myDataLength = 320 * 480 * 4; // must match the 320x480 area read below
// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for(int y = 0; y < 480; y++)
{
for(int x = 0; x < 320 * 4; x++)
{
buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
}
}
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * 320;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef image= CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, false, renderingIntent);
//PRINT image... Its black!!!!!!
CGDataProviderRelease(provider);
free(buffer);
// careful: the CGImage still references buffer2 through the data provider,
// so freeing it is only safe once the image is no longer drawn
free(buffer2);
Before you do a glReadPixels call, you must:
set proper packing (see the glPixelStorei reference page)
select the right buffer to read from with glReadBuffer (front after swapping, back before swapping; I recommend swapping and then reading from the front buffer) - see the sketch below
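A sketch, assuming a double-buffered context that has already swapped (buffer is the malloc'd array from the question):
glPixelStorei(GL_PACK_ALIGNMENT, 1); // tightly packed rows; the default alignment of 4 can skew odd widths
glReadBuffer(GL_FRONT);              // read the buffer that was just presented
glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);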