Making a Cubemap in DX11 from 6 Textures - directx-11

I want to load in a skysphere. I know how to do it using a dds file, but I want to try it from 6 separate texture files.
The problem is that when I load the texture cube, only 3 of the separate textures are visible; the other 3 are not. I'll show my code and how it looks in Nsight. Right now I am using just 6 uniformly colored png files, all 512x512 in size.
std::vector<std::string> paths = {
"../Resources/Textures/posX.png", "../Resources/Textures/negX.png",
"../Resources/Textures/posY.png", "../Resources/Textures/negY.png",
"../Resources/Textures/posZ.png", "../Resources/Textures/negZ.png" };
ID3D11Texture2D* cubeTexture = NULL;
WRL::ComPtr<ID3D11ShaderResourceView> shaderResourceView = NULL;
//Description of each face
D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width = 512;
texDesc.Height = 512;
texDesc.MipLevels = 1;
texDesc.ArraySize = 6;
texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D11_USAGE_DEFAULT;
texDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
texDesc.CPUAccessFlags = 0;
texDesc.MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE;
//The Shader Resource view description
D3D11_SHADER_RESOURCE_VIEW_DESC SMViewDesc = {};
SMViewDesc.Format = texDesc.Format;
SMViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURECUBE;
SMViewDesc.TextureCube.MipLevels = texDesc.MipLevels;
SMViewDesc.TextureCube.MostDetailedMip = 0;
D3D11_SUBRESOURCE_DATA pData[6] = {};
for (int i = 0; i < 6; i++)
{
ID3D11Resource* res = nullptr;
std::wstring pathWString(paths[i].begin(), paths[i].end());
HRESULT hr = DirectX::CreateWICTextureFromFileEx(Renderer::getDevice(), pathWString.c_str(), 0, D3D11_USAGE_STAGING, 0, D3D11_CPU_ACCESS_READ, 0,
WIC_LOADER_FLAGS::WIC_LOADER_DEFAULT,
&res, 0);
assert(SUCCEEDED(hr));
D3D11_MAPPED_SUBRESOURCE destRes = {};
Renderer::getContext()->Map(res, 0, D3D11_MAP_READ, 0, &destRes);
pData[i].pSysMem = destRes.pData;
pData[i].SysMemPitch = destRes.RowPitch;
pData[i].SysMemSlicePitch = destRes.DepthPitch;
Renderer::getContext()->Unmap(res, 0);
RELEASE_COM(res);
}
Renderer::getDevice()->CreateTexture2D(&texDesc, &pData[0], &cubeTexture);
Renderer::getDevice()->CreateShaderResourceView(cubeTexture, &SMViewDesc, shaderResourceView.GetAddressOf());
When graphics debugging, this is what the cubemap looks like: 3 of the textures are loaded in twice, overwriting the other 3.
The documentation says the subresource index should relate to mip levels.
If I loop 9 times instead of 6, the other 3 images are shown instead of the current 3. All 6 should have unique colors.
What I'm doing in the code is creating a Texture2D description and a shader resource view description; then I fetch the data from the imported images using WIC, put it in a staging resource, and map that into a subresource struct.
Looking at the addresses of the subresources, all 6 are always unique, so the textures seem to load in correctly. I have tried moving the row pitch around and changing the size of the image, but that only affects the individual images inside the texture cube; it doesn't move the duplicates around, if you understand what I mean.
Any help is greatly appreciated.

So I found some code in a Frank D. Luna example that does something related.
Here is the code I use that works; mip levels had to be taken into account.
I hope this helps if someone has a similar issue in the future.
ID3D11Texture2D* cubeTexture = NULL;
WRL::ComPtr<ID3D11ShaderResourceView> shaderResourceView = NULL;
//Description of each face
D3D11_TEXTURE2D_DESC texDesc = {};
D3D11_TEXTURE2D_DESC texDesc1 = {};
//The Shader Resource view description
D3D11_SHADER_RESOURCE_VIEW_DESC SMViewDesc = {};
ID3D11Texture2D* tex[6] = { nullptr, nullptr, nullptr, nullptr, nullptr, nullptr };
for (int i = 0; i < 6; i++)
{
std::wstring pathWString(paths[i].begin(), paths[i].end());
HRESULT hr = DirectX::CreateWICTextureFromFileEx(Renderer::getDevice(), pathWString.c_str(), 0, D3D11_USAGE_STAGING, 0, D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE, 0,
WIC_LOADER_FLAGS::WIC_LOADER_DEFAULT,
(ID3D11Resource**)&tex[i], 0);
assert(SUCCEEDED(hr));
}
tex[0]->GetDesc(&texDesc1);
texDesc.Width = texDesc1.Width;
texDesc.Height = texDesc1.Height;
texDesc.MipLevels = texDesc1.MipLevels;
texDesc.ArraySize = 6;
texDesc.Format = texDesc1.Format;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D11_USAGE_DEFAULT;
texDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
texDesc.CPUAccessFlags = 0;
texDesc.MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE;
SMViewDesc.Format = texDesc.Format;
SMViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURECUBE;
SMViewDesc.TextureCube.MipLevels = texDesc.MipLevels;
SMViewDesc.TextureCube.MostDetailedMip = 0;
Renderer::getDevice()->CreateTexture2D(&texDesc, NULL, &cubeTexture);
for (int i = 0; i < 6; i++)
{
for (UINT mipLevel = 0; mipLevel < texDesc.MipLevels; ++mipLevel)
{
D3D11_MAPPED_SUBRESOURCE mappedTex2D;
HRESULT hr = Renderer::getContext()->Map(tex[i], mipLevel, D3D11_MAP_READ, 0, &mappedTex2D);
assert(SUCCEEDED(hr));
Renderer::getContext()->UpdateSubresource(cubeTexture,
D3D11CalcSubresource(mipLevel, i, texDesc.MipLevels),
0, mappedTex2D.pData, mappedTex2D.RowPitch, mappedTex2D.DepthPitch);
Renderer::getContext()->Unmap(tex[i], mipLevel);
}
}
for (int i = 0; i < 6; i++)
{
RELEASE_COM(tex[i]);
}
Renderer::getDevice()->CreateShaderResourceView(cubeTexture, &SMViewDesc, shaderResourceView.GetAddressOf());
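For reference, D3D11CalcSubresource is what routes each copy above to the right face: subresources in a texture array are laid out mip-major within each array slice, so the index is MipSlice + ArraySlice * MipLevels. A minimal sketch of the same calculation (the helper name is mine, not from the code above):
// Subresource layout for a cube (ArraySize = 6): each face owns a full mip chain,
// so face f, mip m lives at index m + f * mipLevels.
// This matches what D3D11CalcSubresource(m, f, mipLevels) returns.
inline UINT CubeFaceSubresource(UINT mipLevel, UINT faceIndex, UINT mipLevels)
{
    return mipLevel + faceIndex * mipLevels;
}
With MipLevels = 1, the six faces are simply subresources 0 through 5. Note also that D3D11_MAPPED_SUBRESOURCE::pData is only valid between Map and Unmap, which is why this version copies with UpdateSubresource while the staging texture is still mapped, rather than stashing the pointers for later as the original attempt did.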

Related

Performance loss with CopyResource() and then Map()/Unmap()

The problem is that using these calls costs about half the FPS. For example, I had about 5000 fps in a 3d scene, and it dropped to about 2500. I know the cause is that the application waits for the copy to complete, but it's only 4 bytes... If I use the D3D11_MAP_FLAG_DO_NOT_WAIT flag, Map() always returns DXGI_ERROR_WAS_STILL_DRAWING. What can be done so that I can use this method without losing fps? Here is my code:
Init
D3D11_BUFFER_DESC outputDesc;
outputDesc.Usage = D3D11_USAGE_DEFAULT;
outputDesc.ByteWidth = sizeof(float);
outputDesc.BindFlags = D3D11_BIND_UNORDERED_ACCESS;
outputDesc.CPUAccessFlags = 0;
outputDesc.StructureByteStride = sizeof(float);
outputDesc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
FOG_TRACE(mDevice->CreateBuffer(&outputDesc, nullptr, &outputBuffer));
outputDesc.Usage = D3D11_USAGE_STAGING;
outputDesc.BindFlags = 0;
outputDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
FOG_TRACE(mDevice->CreateBuffer(&outputDesc, nullptr, &outputResultBuffer));
D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc{};
uavDesc.Buffer.FirstElement = 0;
uavDesc.Buffer.Flags = D3D11_BUFFER_UAV_FLAG_APPEND;
uavDesc.Buffer.NumElements = 1;
uavDesc.Format = DXGI_FORMAT_UNKNOWN;
uavDesc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
FOG_TRACE(mDevice->CreateUnorderedAccessView(outputBuffer, &uavDesc, &unorderedAccessView));
Update
const UINT offset = 0;
mDeviceContext->OMSetRenderTargetsAndUnorderedAccessViews(1, &mRenderTargetView, mDepthStencilView, 1, 1, &unorderedAccessView, &offset);
mDeviceContext->ClearDepthStencilView(mDepthStencilView, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);
ObjectManager::Draw();
mDeviceContext->CopyResource(outputResultBuffer, outputBuffer);
D3D11_MAPPED_SUBRESOURCE mappedBuffer;
HRESULT hr;
FOG_TRACE(hr = mDeviceContext->Map(outputResultBuffer, 0, D3D11_MAP_READ, 0/*D3D11_MAP_FLAG_DO_NOT_WAIT*/, &mappedBuffer));
if (SUCCEEDED(hr))
{
float* copy = (float*)(mappedBuffer.pData);
OutputDebugString(String::ToStr(*copy) + L"\n");
}
mDeviceContext->Unmap(outputResultBuffer, 0);
const UINT var[4]{};
mDeviceContext->ClearUnorderedAccessViewUint(unorderedAccessView, var);
I've already profiled and checked everything I could; the problem is exactly this pending copy. I would be very grateful if someone could explain this in detail :)
The solution turned out to be very simple, though it took a long time to find: I just don't issue a new copy until I have read the previous data. Here is a small crutch:
static bool isWait = false;
if (!isWait)
{
// only queue a new copy once the previous result has been consumed
mDeviceContext->CopyResource(outputResultBuffer, outputBuffer);
}
D3D11_MAPPED_SUBRESOURCE mappedBuffer;
HRESULT hr;
FOG_TRACE(hr = mDeviceContext->Map(outputResultBuffer, 0, D3D11_MAP_READ, D3D11_MAP_FLAG_DO_NOT_WAIT, &mappedBuffer));
if (SUCCEEDED(hr))
{
float* copy = (float*)(mappedBuffer.pData);
OutputDebugString(String::ToStr(*copy) + L"\n");
mDeviceContext->Unmap(outputResultBuffer, 0);
const UINT var[4]{};
mDeviceContext->ClearUnorderedAccessViewUint(unorderedAccessView, var);
isWait = false;
}
else
{
isWait = true;
}
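A more standard way to hide this latency, sketched here under assumptions (the names kLatency, gStaging, gFrame, and Readback are illustrative, not from the code above), is to round-robin over a few staging buffers and read back the copy issued a few frames earlier, which the GPU has usually finished by then:
// Hypothetical round-robin readback: write into gStaging[gFrame % kLatency] and
// map the oldest buffer, whose copy was issued kLatency - 1 frames ago.
static const UINT kLatency = 3;               // staging buffers in flight
static ID3D11Buffer* gStaging[kLatency] = {}; // each created like outputResultBuffer
static UINT gFrame = 0;

void Readback(ID3D11DeviceContext* ctx, ID3D11Buffer* gpuBuffer)
{
    ctx->CopyResource(gStaging[gFrame % kLatency], gpuBuffer);

    UINT oldest = (gFrame + 1) % kLatency;    // written kLatency - 1 frames ago
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (gFrame >= kLatency - 1 &&
        SUCCEEDED(ctx->Map(gStaging[oldest], 0, D3D11_MAP_READ,
                           D3D11_MAP_FLAG_DO_NOT_WAIT, &mapped)))
    {
        float value = *static_cast<float*>(mapped.pData); // data is a few frames old
        (void)value;                                      // consume the value here
        ctx->Unmap(gStaging[oldest], 0);
    }
    ++gFrame;
}
The trade-off is that the value read is kLatency - 1 frames stale, which is usually acceptable for this kind of single-float readback.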

How to get IDXGISurface data from IDXGISwapChain1

I use DXGI to capture a window. Since the window may be resized, I use IDXGISwapChain1 and recreate the frame pool, but now I can't get the surface, because GetRestrictToOutput always returns null.
void OnFrameArrived(winrt::Windows::Graphics::Capture::Direct3D11CaptureFramePool const &sender,winrt::Windows::Foundation::IInspectable const &)
{
const winrt::Windows::Graphics::Capture::Direct3D11CaptureFrame frame = sender.TryGetNextFrame();
auto swapChainResizedToFrame = TryResizeSwapChain(frame);
winrt::com_ptr<ID3D11Texture2D> backBuffer;
winrt::check_hresult(m_swapChain->GetBuffer(0, winrt::guid_of<ID3D11Texture2D>(), backBuffer.put_void()));
winrt::com_ptr<ID3D11Texture2D> frameSurface = GetDXGIInterfaceFromObject<ID3D11Texture2D>(frame.Surface());
m_d3dContext->CopyResource(backBuffer.get(), frameSurface.get());
DXGI_PRESENT_PARAMETERS presentParameters{};
m_swapChain->Present1(1, 0, &presentParameters);
//winrt::com_ptr<IDXGIOutput> dxgiOutput = nullptr;
IDXGIOutput* dxgiOutput = nullptr;
BOOL full;
m_swapChain->GetRestrictToOutput(&dxgiOutput);
if(dxgiOutput!=nullptr){
std::cerr <<"33333333333333"<<std::endl;
...
}
m_framePool.Recreate(m_device, m_pixelFormat, 2, m_lastSize);
If I use GetBuffer to get the image data and the window size has changed, the image is black. The code is as below:
winrt::com_ptr<ID3D11Texture2D> renderBuffer;
winrt::check_hresult(m_swapChain->GetBuffer(0, winrt::guid_of<ID3D11Texture2D>(), renderBuffer.put_void()));
if(renderBuffer != nullptr){
D3D11_TEXTURE2D_DESC desc;
renderBuffer->GetDesc(&desc);
desc.Usage = D3D11_USAGE_STAGING;
desc.BindFlags = 0;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.MiscFlags = 0;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.SampleDesc.Count = 1;
winrt::com_ptr<ID3D11Texture2D> textureCopy;
auto d3dDevice = GetDXGIInterfaceFromObject<ID3D11Device>(m_device);
winrt::check_hresult(d3dDevice->CreateTexture2D(&desc, nullptr, textureCopy.put()));
m_d3dContext->CopyResource(textureCopy.get(), renderBuffer.get());
winrt::com_ptr<IDXGISurface> dxgi_surface;
HRESULT hr = textureCopy->QueryInterface(__uuidof(IDXGISurface), dxgi_surface.put_void());
DXGI_MAPPED_RECT mapped_rect;
hr = dxgi_surface->Map(&mapped_rect, DXGI_MAP_READ);
unsigned int imgSize = desc.Width * desc.Height * 4;
uint8_t* buffer = new uint8_t[imgSize];
int dst_rowpitch = desc.Width * 4;
for (unsigned int h = 0; h < desc.Height; h++) {
memcpy_s(buffer + h * dst_rowpitch, dst_rowpitch, (BYTE*)mapped_rect.pBits + h * mapped_rect.Pitch, min(mapped_rect.Pitch, dst_rowpitch));
}
dxgi_surface->Unmap();
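As a side note, the same staging texture can also be read through ID3D11DeviceContext::Map instead of querying it for IDXGISurface; a minimal sketch, assuming textureCopy and desc from the snippet above (and that <algorithm> and <vector> are available):
// Hypothetical alternative readback via the D3D11 context (same staging copy).
D3D11_MAPPED_SUBRESOURCE mapped = {};
if (SUCCEEDED(m_d3dContext->Map(textureCopy.get(), 0, D3D11_MAP_READ, 0, &mapped))) {
    const UINT dstPitch = desc.Width * 4; // tightly packed 32bpp rows
    std::vector<uint8_t> pixels(dstPitch * desc.Height);
    for (UINT h = 0; h < desc.Height; h++) {
        // RowPitch may include padding, so copy row by row
        memcpy(pixels.data() + h * dstPitch,
               static_cast<const uint8_t*>(mapped.pData) + h * mapped.RowPitch,
               std::min(mapped.RowPitch, dstPitch));
    }
    m_d3dContext->Unmap(textureCopy.get(), 0);
}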

rgb32 data resource mapping. using directx memcpy

I have been trying to solve this problem for a month with googling, but now I have to ask for help here.
I want to render an ffmpeg-decoded frame: using the frame (converted to RGB32 format), I try to render it with a D3D11 2D texture.
ZeroMemory(&TextureDesc, sizeof(TextureDesc));
TextureDesc.Height = pFrame->height;
TextureDesc.Width = pFrame->width;
TextureDesc.MipLevels = 1;
TextureDesc.ArraySize = 1;
TextureDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT; //size 16
TextureDesc.SampleDesc.Count = 1;
TextureDesc.SampleDesc.Quality = 0;
TextureDesc.Usage = D3D11_USAGE_DYNAMIC;
TextureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
TextureDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
TextureDesc.MiscFlags = 0;
result = m_device->CreateTexture2D(&TextureDesc, NULL, &m_2DTex);
if (FAILED(result)) return false;
ShaderResourceViewDesc.Format = TextureDesc.Format;
ShaderResourceViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
ShaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
ShaderResourceViewDesc.Texture2D.MipLevels = 1;
D3D11_MAPPED_SUBRESOURCE S_mappedResource_tt = { 0, };
ZeroMemory(&S_mappedResource_tt, sizeof(D3D11_MAPPED_SUBRESOURCE));
result = m_deviceContext->Map(m_2DTex, 0, D3D11_MAP_WRITE_DISCARD, 0, &S_mappedResource_tt);
if (FAILED(result)) return false;
BYTE* mappedData = reinterpret_cast<BYTE *>(S_mappedResource_tt.pData);
for (auto i = 0; i < pFrame->height; ++i) {
memcpy(mappedData, pFrame->data, pFrame->linesize[0]);
mappedData += S_mappedResource_tt.RowPitch;
pFrame->data[0] += pFrame->linesize[0];
}
m_deviceContext->Unmap(m_2DTex, 0);
result = m_device->CreateShaderResourceView(m_2DTex, &ShaderResourceViewDesc, &m_ShaderResourceView);
if (FAILED(result)) return false;
m_deviceContext->PSSetShaderResources(0, 1, &m_ShaderResourceView);
but it shows me just a black screen (nothing renders).
I guess the memcpy size is wrong.
The biggest problem is that I don't know what the problem actually is.
Question 1: Is there any problem with how I create the 2D texture for mapping?
Question 2: What sizes should the memcpy parameters be (related to the format)?
I based my code on the links below.
[1]https://www.gamedev.net/forums/topic/667097-copy-2d-array-into-texture2d/
[2]https://www.gamedev.net/forums/topic/645514-directx-11-maping-id3d11texture2d/
[3]https://www.gamedev.net/forums/topic/606100-solved-dx11-updating-texture-data/
Thank you for reading; please reply.
Nobody replied, so I solved the issue myself.
I have modified some code and I'm not sure exactly which change solved it; the reason for the black screen was my matrix.
D3D11_TEXTURE2D_DESC TextureDesc;
D3D11_RENDER_TARGET_VIEW_DESC RenderTargetViewDesc;
D3D11_SHADER_RESOURCE_VIEW_DESC ShaderResourceViewDesc;
ZeroMemory(&TextureDesc, sizeof(TextureDesc));
TextureDesc.Height = pFrame->height;
TextureDesc.Width = pFrame->width;
TextureDesc.MipLevels = 1;
TextureDesc.ArraySize = 1;
TextureDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;/*DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;*/ //size 32bit
TextureDesc.SampleDesc.Count = 1;
TextureDesc.SampleDesc.Quality = 0;
TextureDesc.Usage = D3D11_USAGE_DYNAMIC;
TextureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
TextureDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
TextureDesc.MiscFlags = 0;
DWORD* pInitImage = new DWORD[pFrame->width*pFrame->height];
memset(pInitImage, 0, sizeof(DWORD)*pFrame->width*pFrame->height);
D3D11_SUBRESOURCE_DATA InitData;
InitData.pSysMem = pInitImage;
InitData.SysMemPitch = pFrame->width*sizeof(DWORD);
InitData.SysMemSlicePitch = 0;
result = m_device->CreateTexture2D(&TextureDesc, &InitData, &m_2DTex);
if (FAILED(result)) return false;
ShaderResourceViewDesc.Format = TextureDesc.Format;
ShaderResourceViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
ShaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
ShaderResourceViewDesc.Texture2D.MipLevels = 1;
result = m_device->CreateShaderResourceView(m_2DTex, &ShaderResourceViewDesc, &m_ShaderResourceView);
if (FAILED(result)) return false;
D3D11_MAPPED_SUBRESOURCE S_mappedResource_tt;
ZeroMemory(&S_mappedResource_tt, sizeof(S_mappedResource_tt));
DWORD Stride = pFrame->linesize[0];
result = m_deviceContext->Map(m_2DTex, 0, D3D11_MAP_WRITE_DISCARD, 0, &S_mappedResource_tt);
if (FAILED(result)) return false;
BYTE* pFrameData = pFrame->data[0]; // pointer to the start of the decoded (source) frame data
BYTE* mappedData = (BYTE*)S_mappedResource_tt.pData; // pointer to the start of the mapped texture
for (auto i = 0; i < pFrame->height; i++) {
memcpy(mappedData, pFrameData, Stride);
mappedData += S_mappedResource_tt.RowPitch;
pFrameData += Stride;
}
m_deviceContext->Unmap(m_2DTex, 0);
It works well. I hope this will be helpful to those who are doing the same thing as me.
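One detail worth flagging: the copy above assumes the decoded frame already holds 4 bytes per pixel in a layout matching DXGI_FORMAT_R8G8B8A8_UNORM. If the decoder outputs something else (e.g. YUV420P), it needs a libswscale conversion to AV_PIX_FMT_RGBA first; a minimal sketch, with the helper name being mine:
// Hypothetical conversion step (libswscale) producing RGBA rows that match
// DXGI_FORMAT_R8G8B8A8_UNORM; 'decoded' is the frame coming out of the decoder.
extern "C" {
#include <libswscale/swscale.h>
#include <libavutil/frame.h>
}

AVFrame* ToRgba(const AVFrame* decoded)
{
    SwsContext* sws = sws_getContext(decoded->width, decoded->height,
        (AVPixelFormat)decoded->format,
        decoded->width, decoded->height,
        AV_PIX_FMT_RGBA, SWS_BILINEAR, nullptr, nullptr, nullptr);
    AVFrame* rgba = av_frame_alloc();
    rgba->format = AV_PIX_FMT_RGBA;
    rgba->width = decoded->width;
    rgba->height = decoded->height;
    av_frame_get_buffer(rgba, 0);          // allocate the RGBA plane
    sws_scale(sws, decoded->data, decoded->linesize, 0, decoded->height,
              rgba->data, rgba->linesize); // rgba->linesize[0] is the row stride
    sws_freeContext(sws);
    return rgba;                           // caller frees with av_frame_free
}
After this, pFrame->linesize[0] in the mapping loop is the RGBA stride and the row-by-row RowPitch copy lines up.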

Create HBitmap in IExtractImage

I'm developing a thumbnail creator as a shell extension.
To do that, I chose to implement the interface IExtractImage.
My DLL is loaded and called correctly, but the thumbnail is always black instead of red.
What am I missing?
class MyShellPreview : public IExtractImage, IPersistFile
// set by IExtractImage::GetLocation
SIZE m_size;
IFACEMETHODIMP Extract(HBITMAP *phBmpImage)
{
size_t size = m_size.cx * m_size.cy * 3;
// alloc buffer
BYTE *buffer = (BYTE*)malloc(size);
// fill buffer
size_t i, j, k;
for (k = i = 0; i < m_size.cx; ++i)
{
for (j = 0; j < m_size.cy; ++j, ++k)
{
buffer[k] = 128;
buffer[k+1] = 0;
buffer[k+2] = 0;
}
}
*phBmpImage = CreateBitmap(m_size.cx, m_size.cy, 3, 8, buffer);
free(buffer);
return S_OK;
}
};
I know that for performance reasons I should use CreateCompatibleBitmap and SetDIBits, but I'm not sure where I should get an HDC from.
As pointed out in the comments, the solution was to use:
32-bit images
1 color plane
class MyShellPreview : public IExtractImage, IPersistFile
// set by IExtractImage::GetLocation
SIZE m_size;
IFACEMETHODIMP Extract(HBITMAP *phBmpImage)
{
size_t size = m_size.cx * m_size.cy * 4;
// alloc buffer
BYTE *buffer = (BYTE*)calloc(size, 1);
// fill buffer
size_t i, j, k;
for (k = i = 0; i < m_size.cx; ++i)
{
for (j = 0; j < m_size.cy; ++j, k+=4)
{
buffer[k] = 128;
}
}
*phBmpImage = CreateBitmap(m_size.cx, m_size.cy, 1, 32, buffer);
free(buffer);
return S_OK;
}
};
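On the open question of where to get an HDC for CreateCompatibleBitmap and SetDIBits: the screen DC is the usual choice for thumbnail code. A minimal sketch under that assumption, reusing the 32bpp buffer from above (error handling omitted):
// Hypothetical DIB-based variant: describe the buffer as a top-down 32bpp DIB
// and let GDI convert it into a bitmap compatible with the screen.
HDC hdcScreen = GetDC(NULL);                  // screen DC, released below

BITMAPINFO bmi = {};
bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth = m_size.cx;
bmi.bmiHeader.biHeight = -m_size.cy;          // negative height = top-down rows
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;

HBITMAP hbmp = CreateCompatibleBitmap(hdcScreen, m_size.cx, m_size.cy);
SetDIBits(hdcScreen, hbmp, 0, m_size.cy, buffer, &bmi, DIB_RGB_COLORS);
ReleaseDC(NULL, hdcScreen);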

Direct2D Create SwapChain

I am trying to program a Direct2D desktop app based on a Windows tutorial, but am having problems creating the IDXGISwapChain1. In the code below everything gets initialized until the CreateSwapChainForHwnd call; the pointer m_pDXGISwapChain1 stays NULL. All the pointers except pOutput are ComPtrs.
D2D1_FACTORY_OPTIONS options;
ZeroMemory(&options, sizeof(D2D1_FACTORY_OPTIONS));
HRESULT hr = D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED,
__uuidof(ID2D1Factory1), &options, &m_pD2DFactory1);
if(SUCCEEDED(hr))
{
UINT creationFlags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;
D3D_FEATURE_LEVEL featureLevels[] = { D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0 };
hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, 0, creationFlags,
featureLevels, ARRAYSIZE(featureLevels), D3D11_SDK_VERSION, &m_pD3DDevice,
&m_featureLevel, &m_pD3DDeviceContext);
}
if(SUCCEEDED(hr))
hr = m_pD3DDevice.As(&m_pDXGIDevice1);
if(SUCCEEDED(hr))
hr = m_pD2DFactory1->CreateDevice(m_pDXGIDevice1.Get(), &m_pD2DDevice);
if(SUCCEEDED(hr))
hr = m_pD2DDevice->CreateDeviceContext(D2D1_DEVICE_CONTEXT_OPTIONS_NONE, &m_pD2DDeviceContext);
if(SUCCEEDED(hr))
hr = m_pDXGIDevice1->GetAdapter(&m_pDXGIAdapter);
if(SUCCEEDED(hr))
hr = m_pDXGIAdapter->GetParent(IID_PPV_ARGS(&m_pDXGIFactory2));
DXGI_SWAP_CHAIN_DESC1 swapChainDesc1 = {0};
swapChainDesc1.Width = 0;
swapChainDesc1.Height = 0;
swapChainDesc1.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
swapChainDesc1.Stereo = false;
swapChainDesc1.SampleDesc.Count = 1;
swapChainDesc1.SampleDesc.Quality = 0;
swapChainDesc1.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChainDesc1.BufferCount = 2;
swapChainDesc1.Scaling = DXGI_SCALING_NONE;
swapChainDesc1.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;
swapChainDesc1.AlphaMode = DXGI_ALPHA_MODE_IGNORE;
swapChainDesc1.Flags = 0;
IDXGIOutput *pOutput;
m_pDXGIAdapter->EnumOutputs(0, &pOutput);
if(SUCCEEDED(hr))
hr = m_pDXGIFactory2->CreateSwapChainForHwnd(
static_cast<IUnknown*>(m_pD3DDevice.Get()), m_hwnd, &swapChainDesc1,
NULL, pOutput, &m_pDXGISwapChain1);
if(SUCCEEDED(hr))
hr = m_pDXGIDevice1->SetMaximumFrameLatency(1);
if(SUCCEEDED(hr))
hr = m_pDXGISwapChain1->GetBuffer(0, IID_PPV_ARGS(&m_pDXGIBackBuffer));
If all your pointers are ComPtr, then the call should look like this:
ComPtr<ID3D11Device> d3dDevice;
ComPtr<IDXGIFactory2> dxgiFactory;
// assuming d3dDevice and dxgiFactory are initialized correctly:
ComPtr<IDXGISwapChain1> swapChain;
dxgiFactory->CreateSwapChainForHwnd(d3dDevice.Get(), hWnd, &swapChainDescription, nullptr, nullptr, swapChain.GetAddressOf()); // note: pRestrictToOutput is nullptr, not an IDXGIOutput
As for your swap chain description, if you're not making a Windows Store app, you should set
swapChainDescription.Scaling = DXGI_SCALING_STRETCH;
swapChainDescription.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
Both of those values are 0, so you can leave them out.
Here is the complete swap chain description that I use:
DXGI_SWAP_CHAIN_DESC1 swapChainDescription = {};
swapChainDescription.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
swapChainDescription.SampleDesc.Count = 1;
swapChainDescription.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChainDescription.BufferCount = 2;
See this article for a full walkthrough of how to set up Direct2D 1.1 properly, including the CreateSwapChainForHwnd call: http://msdn.microsoft.com/en-us/magazine/dn198239.aspx
Direct2D doesn't have a swap chain at the user level; swap chains belong to Direct3D/DXGI. I see some DirectX 11 code in your post. Do you really want Direct2D, or Direct3D?
