Use a texture array as Direct2D surface render target

I'm trying to create a Direct3D 11 texture array holding multiple pages of text rendered using DirectWrite and Direct2D. Suppose layouts holds the IDWriteTextLayouts for the individual pages; I then try to do the following:
{
    D3D11_TEXTURE2D_DESC desc;
    ::ZeroMemory(&desc, sizeof(desc));
    desc.ArraySize = static_cast<UINT>(layouts.size());
    desc.BindFlags = D3D11_BIND_RENDER_TARGET;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.Width = width;

    auto hr = this->_d3dDevice->CreateTexture2D(&desc, nullptr, &retval.Texture);
    if (FAILED(hr)) {
        throw std::system_error(hr, com_category());
    }
}
for (auto& l : layouts) {
    ATL::CComPtr<IDXGISurface> surface;
    {
        auto hr = retval.Texture->QueryInterface(&surface);
        if (FAILED(hr)) {
            // The code fails here with E_NOINTERFACE "No such interface supported."
            throw std::system_error(hr, com_category());
        }
    }
    // Go on creating the RT from 'surface'.
}
The problem is that the code fails at the designated line, where I try to obtain the IDXGISurface interface from the ID3D11Texture2D, whenever there is more than one page (desc.ArraySize > 1). I eventually found in the documentation (https://learn.microsoft.com/en-us/windows/win32/api/dxgi/nn-dxgi-idxgisurface) that this is by design:
If the 2D texture [...] does not consist of an array of textures, QueryInterface succeeds and returns a pointer to the IDXGISurface interface pointer. Otherwise, QueryInterface fails and does not return the pointer to IDXGISurface.
Is there any other way to obtain the individual DXGI surfaces in the texture array to draw to them one after the other using Direct2D?

As I could not find any way to address the sub-surfaces, I now create a staging texture with one layer to render to and copy the result into the texture array using ID3D11DeviceContext::CopySubresourceRegion.
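For illustration, a minimal sketch of that workaround (the context member _d3dContext and the exact loop structure are assumptions, not the original code): render each page into a single-layer render-target texture, then copy the finished page into the corresponding array slice.

D3D11_TEXTURE2D_DESC pageDesc = desc;  // same size/format as the array texture
pageDesc.ArraySize = 1;

ATL::CComPtr<ID3D11Texture2D> pageTexture;
auto hr = this->_d3dDevice->CreateTexture2D(&pageDesc, nullptr, &pageTexture);
if (FAILED(hr)) {
    throw std::system_error(hr, com_category());
}

for (UINT i = 0; i < static_cast<UINT>(layouts.size()); ++i) {
    // A single-layer texture does expose IDXGISurface, so the Direct2D
    // render target can be created and the page drawn as usual ...

    // ... then copy the finished page into array slice i (mip 0):
    const UINT dst = D3D11CalcSubresource(0, i, desc.MipLevels);
    this->_d3dContext->CopySubresourceRegion(
        retval.Texture, dst, 0, 0, 0,  // destination subresource and x/y/z offset
        pageTexture, 0, nullptr);      // copy the whole source surface
}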

How about Texture => IDXGIResource1 => CreateSubresourceSurface ?
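A sketch of that suggestion (untested; it assumes DXGI 1.2 is available, which is where IDXGIResource1 and IDXGISurface2 were introduced):

ATL::CComPtr<IDXGIResource1> resource;
auto hr = retval.Texture->QueryInterface(&resource);
if (FAILED(hr)) {
    throw std::system_error(hr, com_category());
}

for (UINT i = 0; i < static_cast<UINT>(layouts.size()); ++i) {
    ATL::CComPtr<IDXGISurface2> surface;
    hr = resource->CreateSubresourceSurface(i, &surface);
    if (FAILED(hr)) {
        throw std::system_error(hr, com_category());
    }
    // Create the Direct2D render target from 'surface' as before.
}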

Related

DirectX screen capture on Windows 10/11

I'm trying to create a screen-capture app that can get a raw bitmap of a desktop window from the GPU. I did it via GDI and it works properly, but using both DirectX and the DXGI Duplication API I get a black screen or an error.
void dump_buffer() {
    IDirect3D9* d3d = nullptr;
    d3d = Direct3DCreate9(D3D_SDK_VERSION);
    D3DPRESENT_PARAMETERS d3dpp;
    out("1");
    ZeroMemory(&d3dpp, sizeof(d3dpp));
    D3DDISPLAYMODE d3ddm;
    if (FAILED(d3d->GetAdapterDisplayMode(D3DADAPTER_DEFAULT, &d3ddm)))
        err("IDirect3D9");
    d3dpp.Windowed = TRUE;
    d3dpp.SwapEffect = D3DSWAPEFFECT_DISCARD;
    d3dpp.BackBufferFormat = d3ddm.Format;
    d3dpp.EnableAutoDepthStencil = FALSE;
    IDirect3DDevice9* d3ddev = nullptr;
    out("2");
    d3d->CreateDevice(
        D3DADAPTER_DEFAULT,                  // Use the default video adapter configured in Windows (i.e., the primary one).
        D3DDEVTYPE_HAL,                      // Use hardware rendering (Hardware Abstraction Layer).
        GetDesktopWindow(),                  // Window handle.
        D3DCREATE_SOFTWARE_VERTEXPROCESSING, // Skip hardware T&L features, for compatibility with more video adapters.
        &d3dpp,                              // Pointer to the pre-filled presentation parameters structure (D3DPRESENT_PARAMETERS).
        &d3ddev                              // Receives the created Direct3D device object.
    );
    if (d3ddev == nullptr)
        err("IDirect3DDevice9");
    out("3");
    IDirect3DSurface9* pRenderTarget = nullptr;
    IDirect3DSurface9* pDestTarget = nullptr;
    // sanity checks.
    // get the render target surface.
    HRESULT hr = d3ddev->GetRenderTarget(0, &pRenderTarget);
    if (FAILED(hr)) {
        err("GetRenderTarget");
    }
    // get the current adapter display mode.
    // hr = pDirect3D->GetAdapterDisplayMode(D3DADAPTER_DEFAULT, &d3ddisplaymode);
    out("4");
    D3DDISPLAYMODE mode;
    hr = d3d->GetAdapterDisplayMode(D3DADAPTER_DEFAULT, &mode);
    if (FAILED(hr)) {
        err("GetAdapterDisplayMode");
    }
    // create a destination surface.
    out(mode.Width);
    out(mode.Height);
    out(mode.Format);
    hr = d3ddev->CreateOffscreenPlainSurface(mode.Width,
                                             mode.Height,
                                             mode.Format,
                                             D3DPOOL_SYSTEMMEM,
                                             &pDestTarget,
                                             nullptr);
    if (FAILED(hr)) {
        err("CreateOffscreenPlainSurface");
    }
    out("5");
    // copy the render target to the destination surface.
    hr = d3ddev->GetRenderTargetData(pRenderTarget, pDestTarget);
    if (FAILED(hr)) {
        err(DXGetErrorDescription9A(hr));
        err("GetRenderTargetData");
    }
    // save its contents to a bitmap file.
    out("6");
    hr = D3DXSaveSurfaceToFile(file,
                               D3DXIFF_BMP,
                               pDestTarget,
                               nullptr,
                               nullptr);
    out("7");
    // clean up.
    pRenderTarget->Release();
    pDestTarget->Release();
}
P.S.: Do I understand correctly that by using GetDesktopWindow() I am capturing the desktop, i.e., an image of the entire screen?
Any help solving this problem would be appreciated.
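For reference, here is a minimal sketch of the DXGI Desktop Duplication path the question mentions (all names are illustrative and error handling is abbreviated). One plausible source of the black image in the D3D9 path is that GetRenderTargetData reads back this device's own render target, which nothing has ever drawn to; a D3D9 screen capture would need GetFrontBufferData instead.

#include <d3d11.h>
#include <dxgi1_2.h>

// Capture one desktop frame (the D3D11 device and the IDXGIOutput1 for the
// primary output are assumed to be created/enumerated elsewhere).
HRESULT capture_one_frame(ID3D11Device* device, IDXGIOutput1* output1)
{
    IDXGIOutputDuplication* duplication = nullptr;
    HRESULT hr = output1->DuplicateOutput(device, &duplication);
    if (FAILED(hr))
        return hr;

    DXGI_OUTDUPL_FRAME_INFO frameInfo = {};
    IDXGIResource* desktopResource = nullptr;
    hr = duplication->AcquireNextFrame(500, &frameInfo, &desktopResource); // wait up to 500 ms
    if (SUCCEEDED(hr))
    {
        ID3D11Texture2D* desktopTexture = nullptr;
        hr = desktopResource->QueryInterface(IID_PPV_ARGS(&desktopTexture));
        // ... CopyResource into a D3D11_USAGE_STAGING texture, then Map()
        //     it to read the raw bitmap bits on the CPU ...
        if (desktopTexture)
            desktopTexture->Release();
        desktopResource->Release();
        duplication->ReleaseFrame();
    }
    duplication->Release();
    return hr;
}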

Updating Texture2D frequently causes process to crash (UpdateSubresource)

I am using SharpDX to render the browser (Chromium) output buffer in a DirectX process.
The process is relatively simple: I intercept the CEF buffer (by overriding the OnPaint method) and write it to a Texture2D.
The code is relatively simple.
Texture creation:
public void BuildTextureWrap() {
    var oldTexture = texture;
    texture = new D3D11.Texture2D(DxHandler.Device, new D3D11.Texture2DDescription() {
        Width = overlay.Size.Width,
        Height = overlay.Size.Height,
        MipLevels = 1,
        ArraySize = 1,
        Format = DXGI.Format.B8G8R8A8_UNorm,
        SampleDescription = new DXGI.SampleDescription(1, 0),
        Usage = D3D11.ResourceUsage.Default,
        BindFlags = D3D11.BindFlags.ShaderResource,
        CpuAccessFlags = D3D11.CpuAccessFlags.None,
        OptionFlags = D3D11.ResourceOptionFlags.None,
    });
    var view = new D3D11.ShaderResourceView(
        DxHandler.Device,
        texture,
        new D3D11.ShaderResourceViewDescription {
            Format = texture.Description.Format,
            Dimension = D3D.ShaderResourceViewDimension.Texture2D,
            Texture2D = { MipLevels = texture.Description.MipLevels },
        }
    );
    textureWrap = new D3DTextureWrap(view, texture.Description.Width, texture.Description.Height);
    if (oldTexture != null) {
        obsoleteTextures.Add(oldTexture);
    }
}
That piece of code is executed at startup and whenever a resize happens.
Now, when CEF's OnPaint fires, I basically copy its buffer into the texture:
var destinationRegion = new D3D11.ResourceRegion {
    Top = Math.Min(r.dirtyRect.y, texDesc.Height),
    Bottom = Math.Min(r.dirtyRect.y + r.dirtyRect.height, texDesc.Height),
    Left = Math.Min(r.dirtyRect.x, texDesc.Width),
    Right = Math.Min(r.dirtyRect.x + r.dirtyRect.width, texDesc.Width),
    Front = 0,
    Back = 1,
};

// Draw to the target
var context = targetTexture.Device.ImmediateContext;
context.UpdateSubresource(targetTexture, 0, destinationRegion, sourceRegionPtr, rowPitch, depthPitch);
There is some more code, but this is the only relevant piece. The whole thing works until OnPaint starts happening frequently.
Apparently, if I force CEF to paint frequently, the whole host process dies.
This happens at UpdateSubresource.
So my question is: is there another, safer way to update a texture frequently?
The solution to this problem was relatively simple, yet not so obvious at the beginning.
I simply moved the code responsible for updating the texture into the render loop, and just keep the internal buffer pointer cached, as sketched below.
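The same idea as a sketch in plain C++/D3D11 terms (illustrative names, not the poster's SharpDX code). A likely reason for the crash is that ID3D11DeviceContext is not thread-safe and CEF's OnPaint fires on its own thread, so the immediate context should only ever be touched from the render thread:

#include <d3d11.h>
#include <mutex>

std::mutex g_bufferMutex;
const void* g_latestBuffer = nullptr; // CEF keeps this buffer alive between paints
bool g_dirty = false;

// Runs on CEF's thread: just remember where the newest pixels live.
void OnPaint(const void* buffer)
{
    std::lock_guard<std::mutex> lock(g_bufferMutex);
    g_latestBuffer = buffer;
    g_dirty = true;
}

// Runs once per frame on the render thread, the only thread that touches
// the immediate context.
void UpdateTexture(ID3D11DeviceContext* context, ID3D11Texture2D* texture,
                   UINT rowPitch, UINT depthPitch)
{
    const void* buffer = nullptr;
    {
        std::lock_guard<std::mutex> lock(g_bufferMutex);
        if (!g_dirty)
            return;
        buffer = g_latestBuffer;
        g_dirty = false;
    }
    context->UpdateSubresource(texture, 0, nullptr, buffer, rowPitch, depthPitch);
}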

How to simply render ID3D11Texture2D

I am writing a D2D/D3D interoperability program. I receive an ID3D11Texture2D from outside and need to render it into a designated area. I tried CreateDxgiSurfaceRenderTarget, but it has no effect.
The code is below. It runs "normally": there are no errors and the debug information is displayed, but the window shows a black screen. If I ignore the texture parameter and change the code to only call _back_render_target->FillRectangle, that works.
HRESULT CGraphRender::DrawTexture(ID3D11Texture2D* texture, const RECT& dst_rect)
{
    float dpi = GetDpiFromD2DFactory(_d2d_factory);
    CComPtr<ID3D11Texture2D> temp_texture2d;
    CComPtr<ID2D1RenderTarget> temp_render_target;
    CComPtr<ID2D1Bitmap> temp_bitmap;

    D3D11_TEXTURE2D_DESC desc = { 0 };
    texture->GetDesc(&desc);

    CD3D11_TEXTURE2D_DESC capture_texture_desc(DXGI_FORMAT_B8G8R8A8_UNORM, desc.Width, desc.Height, 1, 1, D3D11_BIND_RENDER_TARGET);
    HRESULT hr = _d3d_device->CreateTexture2D(&capture_texture_desc, nullptr, &temp_texture2d);
    RETURN_ON_FAIL(hr);

    CComPtr<IDXGISurface> dxgi_capture;
    hr = temp_texture2d->QueryInterface(IID_PPV_ARGS(&dxgi_capture));
    RETURN_ON_FAIL(hr);

    D2D1_RENDER_TARGET_PROPERTIES rt_props = D2D1::RenderTargetProperties(D2D1_RENDER_TARGET_TYPE_DEFAULT, D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED), dpi, dpi);
    hr = _d2d_factory->CreateDxgiSurfaceRenderTarget(dxgi_capture, rt_props, &temp_render_target);
    RETURN_ON_FAIL(hr);

    D2D1_BITMAP_PROPERTIES bmp_prop = D2D1::BitmapProperties(D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED), dpi, dpi);
    hr = temp_render_target->CreateBitmap(D2D1::SizeU(desc.Width, desc.Height), bmp_prop, &temp_bitmap);
    RETURN_ON_FAIL(hr);

    CComPtr<ID3D11DeviceContext> immediate_context;
    _d3d_device->GetImmediateContext(&immediate_context);
    if (!immediate_context)
    {
        return E_UNEXPECTED;
    }
    immediate_context->CopyResource(temp_texture2d, texture);

    D2D1_POINT_2U src_point = D2D1::Point2U();
    D2D1_RECT_U src_rect = D2D1::RectU(0, 0, desc.Width, desc.Height);
    hr = temp_bitmap->CopyFromRenderTarget(&src_point, temp_render_target, &src_rect);
    RETURN_ON_FAIL(hr);

    D2D1_RECT_F d2d1_rect = { (float)dst_rect.left, (float)dst_rect.top, (float)dst_rect.right, (float)dst_rect.bottom };
    _back_render_target->DrawBitmap(temp_bitmap, d2d1_rect);
    return S_OK;
}
I had a brief look at DirectXTK's SpriteBatch code. It requires introducing a bunch of context settings, and I don't yet understand how they relate, or whether the existing code influences them, such as setting the render target view, the viewport, or even shaders.
Is there a relatively simple and effective method, as simple and brute-force as ID2D1RenderTarget's DrawBitmap?
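One hedged alternative, assuming the incoming texture was created on the same D3D11 device that backs a Direct2D 1.1 device context (_d2d_device_context below is an assumed member, not from the original code): wrap the texture's DXGI surface directly in a D2D bitmap and draw it, skipping the intermediate texture, render target, and CopyFromRenderTarget entirely.

// Sketch only: requires D2D 1.1 and a texture compatible with D2D interop.
CComPtr<IDXGISurface> surface;
HRESULT hr = texture->QueryInterface(IID_PPV_ARGS(&surface));
RETURN_ON_FAIL(hr);

D2D1_BITMAP_PROPERTIES1 props = D2D1::BitmapProperties1(
    D2D1_BITMAP_OPTIONS_NONE,
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));

CComPtr<ID2D1Bitmap1> bitmap;
hr = _d2d_device_context->CreateBitmapFromDxgiSurface(surface, &props, &bitmap);
RETURN_ON_FAIL(hr);

D2D1_RECT_F d2d1_rect = { (float)dst_rect.left, (float)dst_rect.top, (float)dst_rect.right, (float)dst_rect.bottom };
_d2d_device_context->DrawBitmap(bitmap, d2d1_rect);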

Failure to create EGLSurface using the RenderResolutionScale property on Windows

I'm trying to create an EGLSurface in a Windows UWP app. The creation code is in a xaml.cpp file, as shown below.
When I try creating the surface using the optional EGLRenderResolutionScaleProperty, it fails with an EGL_BAD_ALLOC error. Two alternate approaches work, but I need to use the resolution-scale option for my app.
void MyClass::CreateRenderSurface()
{
    if (mRenderSurface == EGL_NO_SURFACE)
    {
        // NOTE: in practice, I only have one of the three following implementations
        // in the code; all are included together here for ease of comparison.

        // 1. This works.
        mRenderSurface = CreateSurface(mSwapChainPanel, nullptr, nullptr);

        // 2. And this works (here I hardwired the size to twice the size of the
        //    window I happen to be using, because Windows display settings is set at 200%).
        Size size;
        size.Height = 1448; // hardwired value for testing; in this case window height is 724 px
        size.Width = 1908;  // hardwired value for testing; in this case window width is 954 px
        mRenderSurface = CreateSurface(mSwapChainPanel, &size, nullptr);

        // 3. But this fails (and this is the one I want to use).
        float resolutionScale = 1.0;
        mRenderSurface = CreateSurface(mSwapChainPanel, nullptr, &resolutionScale);
    }
}
EGLSurface MyClass::CreateSurface(SwapChainPanel^ panel, const Size* renderSurfaceSize, const float* resolutionScale)
{
    if (!panel)
    {
        throw Exception::CreateException(E_INVALIDARG, L"SwapChainPanel parameter is invalid");
    }
    if (renderSurfaceSize != nullptr && resolutionScale != nullptr)
    {
        throw Exception::CreateException(E_INVALIDARG, L"A size and a scale can't both be specified");
    }

    EGL _egl = this->HelperClass->GetEGL();
    EGLSurface surface = EGL_NO_SURFACE;
    const EGLint surfaceAttributes[] =
    {
        EGL_ANGLE_SURFACE_RENDER_TO_BACK_BUFFER, EGL_TRUE,
        EGL_NONE
    };

    // Create a PropertySet and initialize with the EGLNativeWindowType.
    PropertySet^ surfaceCreationProperties = ref new PropertySet();
    surfaceCreationProperties->Insert(ref new String(EGLNativeWindowTypeProperty), panel);

    // If a render surface size is specified, add it to the surface creation properties.
    if (renderSurfaceSize != nullptr)
    {
        surfaceCreationProperties->Insert(ref new String(EGLRenderSurfaceSizeProperty), PropertyValue::CreateSize(*renderSurfaceSize));
    }

    // If a resolution scale is specified, add it to the surface creation properties.
    if (resolutionScale != nullptr)
    {
        surfaceCreationProperties->Insert(ref new String(EGLRenderResolutionScaleProperty), PropertyValue::CreateSingle(*resolutionScale));
    }

    surface = eglCreateWindowSurface(_egl._display, _egl._config, reinterpret_cast<IInspectable*>(surfaceCreationProperties), surfaceAttributes);
    EGLint err = eglGetError();
    if (surface == EGL_NO_SURFACE)
    {
        throw Exception::CreateException(E_FAIL, L"Failed to create EGL surface");
    }
    return surface;
}
where
const wchar_t EGLNativeWindowTypeProperty[] = L"EGLNativeWindowTypeProperty";
const wchar_t EGLRenderSurfaceSizeProperty[] = L"EGLRenderSurfaceSizeProperty";
const wchar_t EGLRenderResolutionScaleProperty[] = L"EGLRenderResolutionScaleProperty";
I have tried changing the cast of the EGLNativeWindowType argument (as in How to create EGLSurface using C++/WinRT and ANGLE?) - that only creates other problems. As indicated, this code does work to create a surface in the basic case, just not when using the EGLRenderResolutionScaleProperty.
My guess is that something about the way I'm supplying that property is failing, because it fails on what should be reasonable values (e.g., 1.0).
Solved this by first checking that the SwapChainPanel size is not zero:
void MyClass::CreateRenderSurface()
{
    if (mRenderSurface == EGL_NO_SURFACE)
    {
        // Only create the surface once the panel has a non-zero size.
        if (0 != mSwapChainPanel->ActualHeight && 0 != mSwapChainPanel->ActualWidth)
        {
            float resolutionScale = 1.0f;
            mRenderSurface = CreateSurface(mSwapChainPanel, nullptr, &resolutionScale);
        }
    }
}
(The code checks elsewhere whether the render surface has been created, and will call this again if needed.)
Interestingly, the original code that used nullptr for both size and resolution arguments (case 1 in original snippet above) didn't need that check.
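One way to arrange that retry (a hypothetical sketch, not the poster's code) is to hook the panel's SizeChanged event and attempt surface creation again once layout has produced a real size:

// C++/CX: retry surface creation when the SwapChainPanel gets a real size.
// CreateRenderSurface() returns early if the surface already exists.
mSwapChainPanel->SizeChanged +=
    ref new Windows::UI::Xaml::SizeChangedEventHandler(
        [this](Platform::Object^ sender, Windows::UI::Xaml::SizeChangedEventArgs^ args)
        {
            CreateRenderSurface();
        });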

TTF text won't show up in SDL

No matter what I try, I can't get my text to load into a texture in SDL 2.0 using SDL_ttf.
Here is my textToTexture code:
void sdlapp::textToTexture(string text, SDL_Color textColor, SDL_Texture* textTexture)
{
    // free previous texture in textTexture if texture exists
    if (textTexture != nullptr || NULL)
    {
        SDL_DestroyTexture(textTexture);
    }
    SDL_Surface* textSurface = TTF_RenderText_Solid(m_font, text.c_str(), textColor);
    textTexture = SDL_CreateTextureFromSurface(m_renderer, textSurface);
    // free surface
    SDL_FreeSurface(textSurface);
}
And here is where I load the font and create the text texture:
bool sdlapp::loadMedia()
{
    bool success = true;
    // load media here

    // load font
    m_font = TTF_OpenFont("Fonts/MotorwerkOblique.ttf", 28);

    // load text
    SDL_Color textColor = { 0x255, 0x255, 0x235 };
    textToTexture("im a texture thing", textColor, m_font_texture);
    return success;
}
And this is the code I am using to render it:
void sdlapp::render()
{
    // clear the screen
    SDL_RenderClear(m_renderer);

    // do render stuff here
    SDL_Rect rect = { 32, 64, 128, 32 };
    SDL_RenderCopy(m_renderer, m_font_texture, NULL, NULL);

    // update the screen to the current render
    SDL_RenderPresent(m_renderer);
}
Does anyone know what I am doing wrong?
Thanks in advance, JustinWeq.
textToTexture renders the text with SDL_ttf, and the resulting SDL_Texture address is then assigned to a variable called textTexture. The problem is that textTexture is a local variable pointing to the same address as m_font_texture. They're not the same variable; they're two different variables pointing to the same place, so you're not changing the caller's variable.
For clarification on pointers, I'd recommend reading question 4.8 of the C-FAQ.
I'd make textToTexture return the new texture's address, and not have it free resources it doesn't manage (m_font_texture belongs to sdlapp, so it should be managed by sdlapp). A sketch of that is below.
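A minimal sketch of that suggestion (hypothetical, with the signature changed accordingly):

SDL_Texture* sdlapp::textToTexture(const std::string& text, SDL_Color textColor)
{
    // Render the text to a temporary surface, convert it to a texture,
    // and hand ownership of the texture back to the caller.
    SDL_Surface* textSurface = TTF_RenderText_Solid(m_font, text.c_str(), textColor);
    if (textSurface == nullptr)
        return nullptr;

    SDL_Texture* texture = SDL_CreateTextureFromSurface(m_renderer, textSurface);
    SDL_FreeSurface(textSurface);
    return texture;
}

// The caller (loadMedia) now owns and manages its texture:
// if (m_font_texture != nullptr) SDL_DestroyTexture(m_font_texture);
// m_font_texture = textToTexture("im a texture thing", textColor);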
