What's the etymology behind the type name ID3D10Blob for DirectX shaders?

I'm reading http://www.directxtutorial.com/Lesson.aspx?lessonid=11-4-5 and trying to reason about the naming conventions in Windows.
I see ID3D10Blob as (I)(D3D10)(Blob).
I = interface?
D3D10 = direct3d 10?
Blob = ?
I've seen "Binary large object" online for Blob, but I'm not sure if that has the same meaning in this context.
What does the blob term mean?

TL;DR: It's just a simple ref-counted container for a variable-length blob of binary data used by the D3DCompiler COM interfaces.
The HLSL compiler produces a 'shader blob' which is just an opaque binary object. It has a size and data. It could be really anything, but in the world of "COM" objects, it was implemented for Windows Vista as ID3D10Blob with the introduction of Direct3D 10.
Historically, Direct3D 9 and earlier had a 'fixed-function' rendering pipeline which means you could use it without HLSL shaders. For Direct3D 10, the 'fixed-function' was removed, so HLSL was required to use it at all. Therefore, a version of the Direct3D HLSL Compiler was added to the OS.
The ID3DBlob interface is what's used for Direct3D 11 or Direct3D 12, but if you look at it closely, it's the same thing.
typedef ID3D10Blob ID3DBlob;
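As a sketch of where such a blob shows up at runtime (not part of the original answer; the file name, entry point, and shader target below are placeholders), the D3DCompiler APIs hand the bytecode and any error text back as ID3DBlob, and all you ever do with one is ask for its pointer and size:
#include <Windows.h>
#include <d3dcompiler.h>
#include <wrl/client.h>
#pragma comment(lib, "d3dcompiler.lib")

Microsoft::WRL::ComPtr<ID3DBlob> bytecode;
Microsoft::WRL::ComPtr<ID3DBlob> errors;
HRESULT hr = D3DCompileFromFile(L"PixelShader.hlsl", nullptr, nullptr,
    "main", "ps_5_0", 0, 0, &bytecode, &errors);
if (FAILED(hr) && errors)
{
    // The error blob is just text: a pointer and a size, nothing more.
    OutputDebugStringA(static_cast<const char*>(errors->GetBufferPointer()));
}
// The bytecode blob is consumed the same way:
// device->CreatePixelShader(bytecode->GetBufferPointer(), bytecode->GetBufferSize(), ...);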
The Direct3D API itself actually doesn't use this specific 'blob' interface. In a C++ STL world, you could use std::vector<uint8_t> as a shader blob:
#include <cstdint>
#include <fstream>
#include <stdexcept>
#include <vector>

inline std::vector<uint8_t> ReadData(_In_z_ const wchar_t* name)
{
    std::ifstream inFile(name, std::ios::in | std::ios::binary | std::ios::ate);
    if (!inFile)
        throw std::runtime_error("ReadData");

    const std::streampos len = inFile.tellg();
    if (!inFile)
        throw std::runtime_error("ReadData");

    std::vector<uint8_t> blob;
    blob.resize(size_t(len));

    inFile.seekg(0, std::ios::beg);
    if (!inFile)
        throw std::runtime_error("ReadData");

    inFile.read(reinterpret_cast<char*>(blob.data()), len);
    if (!inFile)
        throw std::runtime_error("ReadData");

    inFile.close();
    return blob;
}
…
auto vertexShaderBlob = ReadData(L"VertexShader.cso");
ThrowIfFailed(
    device->CreateVertexShader(vertexShaderBlob.data(), vertexShaderBlob.size(),
        nullptr, m_spVertexShader.ReleaseAndGetAddressOf()));

auto pixelShaderBlob = ReadData(L"PixelShader.cso");
ThrowIfFailed(
    device->CreatePixelShader(pixelShaderBlob.data(), pixelShaderBlob.size(),
        nullptr, m_spPixelShader.ReleaseAndGetAddressOf()));
See Microsoft Docs and this blog post.

Related

rust imgui, how do you set it up?

I am trying to set up rust imgui for a custom renderer I am porting to Rust.
I am stuck on two fronts: getting the peripheral callbacks, and the rendering.
In C++ the setup was fairly simple:
ImGuiContext* InitImgui(ModuleStorage::ModuleStorage& module, NECore::Gallery& gallery)
{
    ImGuiContext* imgui_context = ImGui::CreateContext();
    ImGuiIO& io = ImGui::GetIO();

    unsigned char* pixels;
    int width, height;
    io.Fonts->GetTexDataAsRGBA32(&pixels, &width, &height);

    CpuImage font_image(pixels, width, height, 4);
    uint font_id = gallery.StoreImage<CpuImage::GetImageData>(
        font_image, "__ImguiFont", NECore::ImageFormat::R8G8B8A8_UNORM);
    io.Fonts->SetTexID((ImTextureID)(intptr_t)font_id);

    ImGui_ImplGlfw_InitForVulkan(module.GetWindow().GetGLFWWindow(), true);

    imgui_shader = module.AddShader(
        {"./CommonShaders/imgui.vert",
         "./CommonShaders/imgui.frag"});

    return imgui_context;
}
30 lines of code and we have the initialization done.
Well, there are some issues in Rust: io.Fonts->GetTexDataAsRGBA32(&pixels, &width, &height); does not exist. I assume the equivalent is let font = fonts.build_rgba32_texture();
Assuming that's the case, the next issue is setting the texture id, which I cannot find anywhere in the docs or the source code.
io.Fonts->SetTexID((ImTextureID)(intptr_t)font_id);
That function does not exist in the Rust bindings. And ImGui_ImplGlfw_InitForVulkan is nowhere to be found either.
The examples at https://github.com/imgui-rs/imgui-rs/blob/main/imgui-examples/examples/support/mod.rs seem to use pre-existing renderers and do not do a good job of showing how to integrate the tool into an existing renderer other than the ones the author chose, which is baffling; one of the biggest selling points of imgui is how simple it is to integrate into pre-existing codebases.
I am at a loss; how do you bootstrap the library in Rust?

Create a font resource from byte array on Win32

I have a byte array that contains the contents of a read in font file. I'd like WinAPI (No Gdi+) to create a font resource from it, so I could use it for rendering text.
I only know about AddFontResourceExW, that loads in a font resource from file, and AddFontMemResourceEx, which sounded like what I'd need, but it seems to me that it's still some resource-system thing and the data would have to be pre-associated with the program.
Can I somehow convert my loaded in byte-array into a font resource? (Possibly without writing it to a file and then calling AddFontResourceExW)
When you load a font from a resource script into memory, you use code like the following (you didn't add a language tag, so I'm using C/C++ code - let me know if that's a problem):
HANDLE H_myfont = INVALID_HANDLE_VALUE;
HINSTANCE hResInstance = ::GetModuleHandle(nullptr);
HRSRC ares = FindResource(hResInstance, MAKEINTRESOURCE(IDF_MYID), L"BINARY");
if (ares) {
    HGLOBAL amem = LoadResource(hResInstance, ares);
    if (amem != nullptr) {
        void *adata = LockResource(amem);
        DWORD nFonts = 0, len = SizeofResource(hResInstance, ares);
        H_myfont = AddFontMemResourceEx(adata, len, nullptr, &nFonts);
    }
}
The key line here is void *adata = LockResource(amem); - this converts the font resource loaded as an HGLOBAL into 'accessible memory' (documentation). Now, assuming your byte array is in the correct format (see below), you could probably just pass a pointer to it (as void*) in the call to AddFontMemResourceEx. (You can use your known array size in place of calling SizeofResource.)
I would suggest code something like this:
void *my_font_data = (void*)(font_byte_array); // Your byte array data
DWORD nFonts = 0, len = sizeof(font_byte_array);
H_myfont = AddFontMemResourceEx(my_font_data, len, nullptr, &nFonts);
which (hopefully) will give you a loaded and useable font resource.
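Once loaded, selecting the font for drawing is plain GDI; a minimal sketch (the face name "MyFont" and the window handle hwnd are placeholders, not from the question):
// The face name must match the family name embedded in the font data.
LOGFONTW lf = {};
lf.lfHeight = -24;                  // ~24-pixel em height
lf.lfCharSet = DEFAULT_CHARSET;
wcscpy_s(lf.lfFaceName, L"MyFont"); // placeholder face name
HFONT hFont = CreateFontIndirectW(&lf);

HDC hdc = GetDC(hwnd);
HFONT oldFont = (HFONT)SelectObject(hdc, hFont);
TextOutW(hdc, 10, 10, L"Hello", 5);
SelectObject(hdc, oldFont);
DeleteObject(hFont);
ReleaseDC(hwnd, hdc);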
When you're done with the font (which, once loaded, can be used just like any system-installed font), you can release it with:
RemoveFontMemResourceEx(H_myfont);
As I don't have your byte array, I can't (obviously) test this idea. However, if you do try it, please let us know if it works. (If it doesn't, there may be some other, relatively straightforward, steps that need to be added.)
NOTE: Although I can't say with 100% certainty what exact format a "font resource" is expected to have, the fact that the code given above works (for me) with a resource defined in the .rc script as BINARY from a normal .ttf file suggests that, if your byte array follows the format of a Windows font file, it should work. This is how I have included a font as an embedded resource:
IDF_MYFONT BINARY L"..\\Resource\\MyFont.ttf"

DXGI 1.5 DuplicateOutput1 fails with DXGI_ERROR_UNSUPPORTED (0x887a0004)

For some reason DuplicateOutput1 fails where DuplicateOutput does not.
#include <D3D11.h>
#include <DXGI1_5.h>

int main() {
    ID3D11Device *device;
    D3D_FEATURE_LEVEL levels[] = { D3D_FEATURE_LEVEL_11_1 };
    D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0, levels, ARRAYSIZE(levels), D3D11_SDK_VERSION, &device, NULL, NULL);

    IDXGIDevice *dxDevice;
    device->QueryInterface<IDXGIDevice>(&dxDevice);

    IDXGIAdapter *adapter;
    dxDevice->GetAdapter(&adapter);

    IDXGIOutput *output;
    adapter->EnumOutputs(0, &output);

    IDXGIOutput5 *output5;
    output->QueryInterface<IDXGIOutput5>(&output5);

    IDXGIOutputDuplication *outputDuplication;
    auto hr1 = output5->DuplicateOutput(device, &outputDuplication);
    // hr1 == S_OK here

    const DXGI_FORMAT formats[] = { DXGI_FORMAT_B8G8R8A8_UNORM };
    auto hr2 = output5->DuplicateOutput1(device, 0, ARRAYSIZE(formats), formats, &outputDuplication);
}
0x887a0004 : The specified device interface or feature level is not supported on this system.
I will post here the answer from @weggo, because I almost missed it!
For those that might stumble upon this in the future: calling SetProcessDpiAwarenessContext(DPI_AWARENESS_CONTEXT_PER_MONITOR_AWARE_V2) allows DuplicateOutput1 to succeed. I have no idea why DuplicateOutput1 checks the process DPI awareness, though.
I will just add that you have to set DPI Awareness to False in the project's manifest settings for SetProcessDpiAwarenessContext to work :)
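For reference, a minimal sketch of where that call would sit relative to the question's snippet (assuming a Windows 10 1703+ SDK, where SetProcessDpiAwarenessContext and DPI_AWARENESS_CONTEXT_PER_MONITOR_AWARE_V2 are declared):
#include <Windows.h>
#include <D3D11.h>
#include <DXGI1_5.h>

int main() {
    // Opt the process into per-monitor-v2 DPI awareness before doing any DXGI work;
    // without this, DuplicateOutput1 failed with DXGI_ERROR_UNSUPPORTED as described above.
    SetProcessDpiAwarenessContext(DPI_AWARENESS_CONTEXT_PER_MONITOR_AWARE_V2);

    // ... create the device and get IDXGIOutput5 as in the question, then:
    // auto hr2 = output5->DuplicateOutput1(device, 0, ARRAYSIZE(formats), formats, &outputDuplication);
}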
This could happen if you run on a system with both an integrated graphics chip and a discrete GPU. See https://support.microsoft.com/en-us/kb/3019314:
Unfortunately, this issue occurs because the Desktop Duplication API does not support being run against the discrete GPU on a Microsoft Hybrid system. By design, the call fails together with error code DXGI_ERROR_UNSUPPORTED in such a scenario.
To work around this issue, run the application on the integrated GPU instead of on the discrete GPU on a Microsoft Hybrid system.
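If you want to follow that advice programmatically rather than via the driver control panel, one approach (a sketch, not from the KB article) is to create the D3D11 device on the adapter that actually has outputs attached, which on a hybrid system is normally the integrated GPU:
#include <d3d11.h>
#include <dxgi1_5.h>
#include <wrl/client.h>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

// Returns the first adapter that drives at least one monitor.
ComPtr<IDXGIAdapter1> FindAdapterWithOutput()
{
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return nullptr;

    for (UINT i = 0; ; ++i)
    {
        ComPtr<IDXGIAdapter1> adapter;
        if (factory->EnumAdapters1(i, &adapter) == DXGI_ERROR_NOT_FOUND)
            break;

        ComPtr<IDXGIOutput> output;
        if (SUCCEEDED(adapter->EnumOutputs(0, &output)))
            return adapter;
    }
    return nullptr;
}
Pass that adapter to D3D11CreateDevice with D3D_DRIVER_TYPE_UNKNOWN so the device, and therefore the duplication, lives on the GPU the desktop is actually displayed on.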

Direct2D fails when drawing a single-channel bitmap

I'm an experienced programmer specialized in Computer Graphics, mainly using Direct3D 9.0c, OpenGL and general algorithms. Currently, I am evaluating Direct2D as rendering technology for a professional application dealing with medical image data. As for rendering, it is a x64 desktop application in windowed mode (not fullscreen).
Already with my very initial steps I struggle with a task I thought would be a no-brainer: Rendering a single-channel bitmap on screen.
Running on a Windows 8.1 machine, I create an ID2D1DeviceContext with a Direct3D swap chain buffer surface as render target. The swap chain is created from a HWND and buffer format DXGI_FORMAT_B8G8R8A8_UNORM. Note: See also the code snippets at the end.
Afterwards, I create a bitmap with pixel format DXGI_FORMAT_R8_UNORM and alpha mode D2D1_ALPHA_MODE_IGNORE. When calling DrawBitmap(...) on the device context, a debug breakpoint is triggered with the debug message "D2D DEBUG ERROR - This operation is not compatible with the pixel format of the bitmap".
I know that this output is quite clear. Also, when changing the pixel format to DXGI_FORMAT_R8G8B8A8_UNORM with DXGI_ALPHA_MODE_IGNORE everything works well and I see the bitmap rendered. However, I simply cannot believe that! Graphics cards have supported single-channel textures forever - every 3D graphics application can use them without thinking twice. This goes without saying.
I tried to find anything here and on Google, without success. The only hint I could find was the MSDN Direct2D page on supported pixel formats. The documentation suggests - by not mentioning it - that DXGI_FORMAT_R8_UNORM is indeed not supported as a bitmap format. I also found posts talking about alpha masks (using DXGI_FORMAT_A8_UNORM), but that's not what I'm after.
What am I missing that I can't convince Direct2D to create and draw a grayscale bitmap? Or is it really true that Direct2D doesn't support drawing of R8 or R16 bitmaps??
Any help is really appreciated as I don't know how to solve this. If I can't get this trivial basics to work, I think I'd have to stop digging deeper into Direct2D :-(.
And here are the code snippets of relevance. Please note that they might not compile since I ported this on the fly from my C++/CLI code to plain C++. Also, I threw away all error checking and other noise:
Device, Device Context and Swap Chain Creation (D3D and Direct2D):
// Direct2D factory creation
D2D1_FACTORY_OPTIONS options = {};
options.debugLevel = D2D1_DEBUG_LEVEL_INFORMATION;
ID2D1Factory1* d2dFactory;
D2D1CreateFactory(D2D1_FACTORY_TYPE_MULTI_THREADED, options, &d2dFactory);
// Direct3D device creation
const auto type = D3D_DRIVER_TYPE_HARDWARE;
const auto flags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;
ID3D11Device* d3dDevice;
D3D11CreateDevice(nullptr, type, nullptr, flags, nullptr, 0, D3D11_SDK_VERSION, &d3dDevice, nullptr, nullptr);
// Direct2D device creation
IDXGIDevice* dxgiDevice;
d3dDevice->QueryInterface(__uuidof(IDXGIDevice), reinterpret_cast<void**>(&dxgiDevice));
ID2D1Device* d2dDevice;
d2dFactory->CreateDevice(dxgiDevice, &d2dDevice);
// Swap chain creation
DXGI_SWAP_CHAIN_DESC1 desc = {};
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
desc.BufferCount = 2;
IDXGIAdapter* dxgiAdapter;
dxgiDevice->GetAdapter(&dxgiAdapter);
IDXGIFactory2* dxgiFactory;
dxgiAdapter->GetParent(__uuidof(IDXGIFactory2), reinterpret_cast<void **>(&dxgiFactory));
IDXGISwapChain1* swapChain;
dxgiFactory->CreateSwapChainForHwnd(d3dDevice, hwnd, &desc, nullptr, nullptr, &swapChain);
// Direct2D device context creation
const auto contextOptions = D2D1_DEVICE_CONTEXT_OPTIONS_NONE;
ID2D1DeviceContext* deviceContext;
d2dDevice->CreateDeviceContext(contextOptions, &deviceContext);
// create render target bitmap from swap chain
IDXGISurface* swapChainSurface;
swapChain->GetBuffer(0, __uuidof(swapChainSurface), reinterpret_cast<void **>(&swapChainSurface));
D2D1_BITMAP_PROPERTIES1 bitmapProperties;
bitmapProperties.dpiX = 0.0f;
bitmapProperties.dpiY = 0.0f;
bitmapProperties.bitmapOptions = D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW;
bitmapProperties.pixelFormat.format = DXGI_FORMAT_B8G8R8A8_UNORM;
bitmapProperties.pixelFormat.alphaMode = D2D1_ALPHA_MODE_IGNORE;
bitmapProperties.colorContext = nullptr;
ID2D1Bitmap1* swapChainBitmap = nullptr;
deviceContext->CreateBitmapFromDxgiSurface(swapChainSurface, &bitmapProperties, &swapChainBitmap);
// set swap chain bitmap as render target of D2D device context
deviceContext->SetTarget(swapChainBitmap);
D2D single-channel Bitmap Creation:
const D2D1_SIZE_U size = { 512, 512 };
const UINT32 pitch = 512;
D2D1_BITMAP_PROPERTIES1 d2dProperties;
ZeroMemory(&d2dProperties, sizeof(D2D1_BITMAP_PROPERTIES1));
d2dProperties.pixelFormat.alphaMode = D2D1_ALPHA_MODE_IGNORE;
d2dProperties.pixelFormat.format = DXGI_FORMAT_R8_UNORM;
char* sourceData = new char[512*512];
ID2D1Bitmap1* d2dBitmap;
deviceContext->CreateBitmap(size, sourceData, pitch, &d2dProperties, &d2dBitmap);
Bitmap drawing (FAILING):
deviceContext->BeginDraw();
D2D1_COLOR_F d2dColor = {};
deviceContext->Clear(d2dColor);
// THIS LINE FAILS WITH THE DEBUG BREAKPOINT IF SINGLE CHANNELED
deviceContext->DrawBitmap(d2dBitmap, nullptr, 1.0f, D2D1_INTERPOLATION_MODE_LINEAR, nullptr);
swapChain->Present(1, 0);
deviceContext->EndDraw();
From my little experience, Direct2D seems very limited, indeed.
Have you tried Direct2D effects (ID2D1Effect)? You can write your own [it seems comparatively complicated], or use one of the built-in effects [which is rather simple].
There is one called Color matrix effect (CLSID_D2D1ColorMatrix). It might work to have your DXGI_FORMAT_R8_UNORM (or DXGI_FORMAT_A8_UNORM; any single-channel format would do) bitmap as input (inputs to effects are ID2D1Image, and ID2D1Bitmap inherits from ID2D1Image). Then set D2D1_COLORMATRIX_PROP_COLOR_MATRIX to a matrix that copies the input channel to all output channels. I have not tried it, though.
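To make that concrete, here is a rough, untested sketch of the idea (the matrix values are my assumption for "copy the red channel to R, G and B and force alpha to 1"; deviceContext and d2dBitmap are the objects from the question):
#include <d2d1_1.h>
#include <d2d1_1helper.h>
#include <d2d1effects.h>
#pragma comment(lib, "dxguid.lib") // CLSID_D2D1ColorMatrix

ID2D1Effect* colorMatrixEffect = nullptr;
deviceContext->CreateEffect(CLSID_D2D1ColorMatrix, &colorMatrixEffect);

// The single-channel bitmap becomes the effect input (an ID2D1Bitmap is an ID2D1Image).
colorMatrixEffect->SetInput(0, d2dBitmap);

// Rows are the R, G, B, A inputs plus a constant offset row:
// output.rgb = input.r, output.a = 1.
colorMatrixEffect->SetValue(D2D1_COLORMATRIX_PROP_COLOR_MATRIX,
    D2D1::Matrix5x4F(
        1, 1, 1, 0,
        0, 0, 0, 0,
        0, 0, 0, 0,
        0, 0, 0, 0,
        0, 0, 0, 1));

deviceContext->BeginDraw();
deviceContext->DrawImage(colorMatrixEffect);
deviceContext->EndDraw();

colorMatrixEffect->Release();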

load a ttf font with the Windows API

With CreateFont one can specify font name and a bunch of other properties. However, what if I have a font.ttf file, and I want that particular font to be loaded by windows? How do I specify that specific file to be used?
I'm pretty sure you can't. All requests for fonts go through the font mapper, and it picks out the font file that comes closest to meeting the specifications you've given. I'm not sure it ever does so in reality, but it could at least theoretically use (for example) data from two entirely separate font files to create one logical font.
It's admittedly rather indirect, but you could utilize GDI interop with DWrite when running on Windows 7+.
#include <Windows.h>
#include <WindowsX.h>
#include <DWrite.h>
...
// Make the font file visible to GDI.
AddFontResourceEx(fontFileName, FR_PRIVATE, 0);

LOGFONT logFont = {};
if (SUCCEEDED(GetLogFontFromFileName(fontFileName, &logFont)))
{
    logFont.lfHeight = -long(desiredPpem);
    HFONT hf = CreateFontIndirect(&logFont);
    HFONT oldFont = SelectFont(hdc, hf);
    ...
    // Do stuff...
    ...
    SelectFont(hdc, oldFont);
    DeleteObject(hf);
}
// Fonts added with AddFontResourceEx(FR_PRIVATE) are removed with the Ex variant.
RemoveFontResourceEx(fontFileName, FR_PRIVATE, 0);
...
HRESULT GetLogFontFromFileName(_In_z_ wchar_t const* fontFileName, _Out_ LOGFONT* logFont)
{
    // DWrite objects
    ComPtr<IDWriteFactory> dwriteFactory;
    ComPtr<IDWriteFontFace> fontFace;
    ComPtr<IDWriteFontFile> fontFile;
    ComPtr<IDWriteGdiInterop> gdiInterop;

    // Set up our DWrite factory and interop interface.
    IFR(DWriteCreateFactory(
        DWRITE_FACTORY_TYPE_SHARED,
        __uuidof(IDWriteFactory),
        reinterpret_cast<IUnknown**>(&dwriteFactory)
        ));
    IFR(dwriteFactory->GetGdiInterop(&gdiInterop));

    // Open the file and determine the font type.
    IFR(dwriteFactory->CreateFontFileReference(fontFileName, nullptr, &fontFile));
    BOOL isSupportedFontType = false;
    DWRITE_FONT_FILE_TYPE fontFileType;
    DWRITE_FONT_FACE_TYPE fontFaceType;
    UINT32 numberOfFaces = 0;
    IFR(fontFile->Analyze(&isSupportedFontType, &fontFileType, &fontFaceType, &numberOfFaces));
    if (!isSupportedFontType)
        return DWRITE_E_FILEFORMAT;

    // Set up a font face from the array of font files (just one).
    ComPtr<IDWriteFontFile> fontFileArray[] = {fontFile};
    IFR(dwriteFactory->CreateFontFace(
        fontFaceType,
        ARRAYSIZE(fontFileArray),   // file count
        &fontFileArray[0],          // or GetAddressOf if WRL ComPtr
        0,                          // faceIndex
        DWRITE_FONT_SIMULATIONS_NONE,
        &fontFace
        ));

    // Get the necessary logical font information.
    IFR(gdiInterop->ConvertFontFaceToLOGFONT(fontFace, OUT logFont));

    return S_OK;
}
Where IFR is just a failure macro that returns on a FAILED HRESULT, and ComPtr is a helper smart pointer class (substitute with your own, or ATL CComPtr, WinRT ComPtr, VS2013 _com_ptr_t...).
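For completeness, one possible definition of those helpers (an assumption, not the author's code), here using ATL's CComPtr so the snippet above reads unchanged:
#include <atlbase.h> // ATL::CComPtr

// Alias so 'ComPtr<T>' in the snippet resolves; WRL's ComPtr works too,
// but then the & / GetAddressOf spots called out in the comments need adjusting.
template <typename T>
using ComPtr = ATL::CComPtr<T>;

// "If Failed, Return": bail out of the current function with the failing HRESULT.
#ifndef IFR
#define IFR(expr) { HRESULT hrTmp = (expr); if (FAILED(hrTmp)) return hrTmp; }
#endif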
One possibility is to call EnumFonts() and save the results. Then add your private font with AddFontResourceEx(), and call EnumFonts() again; the difference is what you added. Note that TTF and bitmap fonts enumerate differently, but for this test that shouldn't matter.
If you were using bitmap fonts, they could be easily parsed (.FNT and .FON). For TTF you'd likely have to build (or borrow - another commenter suggested FreeType) a parser to pull the "name" table out of the TTF file.
That seems like a lot of work for a font you're controlling or supplying with your app.
We use AddFontResourceEx() to add a private font, but since we control the font we're adding, we just hardcode the fontname passed to CreateFontIndirect() to match.
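For illustration, a minimal sketch of that approach (the file name and face name "MyFont" are made up for the example):
// Register the font for this process only, then select it by the known family name.
AddFontResourceExW(L"MyFont.ttf", FR_PRIVATE, 0);

LOGFONTW lf = {};
lf.lfHeight = -24;
wcscpy_s(lf.lfFaceName, L"MyFont"); // hardcoded to match the font we ship
HFONT hFont = CreateFontIndirectW(&lf);

// ... use the font ...

DeleteObject(hFont);
RemoveFontResourceExW(L"MyFont.ttf", FR_PRIVATE, 0);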
If you don't care about installing the font, you can do so with AddFontResource; then you can fetch the relationship between the physical .TTF and its logical/family name by looking at the mappings in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Fonts.
I mentioned PrivateFontCollection in my comment because I thought you wanted to do this temporarily; you can load a TTF into a PFC with PrivateFontCollection::AddFontFile, fetch back the new FontFamily object from the collection & examine GetFamilyName. (I've done similar with the .NET implementation of this but not the raw API.)
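Going back to the registry suggestion above, a small sketch of reading those mappings (this only covers fonts that are actually installed, which is the scenario described there):
#include <Windows.h>
#include <stdio.h>
#pragma comment(lib, "advapi32.lib")

// List the "family name" -> font file mappings under the Fonts key.
void DumpInstalledFontMappings()
{
    HKEY key;
    if (RegOpenKeyExW(HKEY_LOCAL_MACHINE,
            L"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Fonts",
            0, KEY_READ, &key) != ERROR_SUCCESS)
        return;

    for (DWORD i = 0; ; ++i)
    {
        wchar_t name[256];
        DWORD nameLen = ARRAYSIZE(name);
        wchar_t file[MAX_PATH];
        DWORD fileLen = sizeof(file);
        DWORD type = 0;
        LSTATUS rc = RegEnumValueW(key, i, name, &nameLen, nullptr, &type,
                                   reinterpret_cast<BYTE*>(file), &fileLen);
        if (rc == ERROR_NO_MORE_ITEMS)
            break;
        if (rc == ERROR_SUCCESS && type == REG_SZ)
            wprintf(L"%s -> %s\n", name, file); // e.g. "Arial (TrueType)" -> arial.ttf
    }
    RegCloseKey(key);
}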
