How to create a Texture2D with a full set of mipmaps in DirectX 11 (versions before DirectX 11.1) - directx-11

This problem grew out of a question I posted before. The code works under Windows 10 with DirectX 12, but creating the Texture2D fails under Windows 7 with DirectX 11. I create a second Texture2D to generate mipmaps like this:
D3D11_TEXTURE2D_DESC textureDesc = {}; // zero-init so SampleDesc.Quality etc. are 0
textureDesc.Width = nWidth;   // video width
textureDesc.Height = nHeight; // video height
textureDesc.MipLevels = 0;    // 0 = generate a full set of subtextures
textureDesc.ArraySize = 1;
textureDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
textureDesc.SampleDesc.Count = 1;
textureDesc.Usage = D3D11_USAGE_DEFAULT;
textureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
textureDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
textureDesc.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS;
HRESULT hr = m_pD3dDevice->CreateTexture2D(&textureDesc, NULL, &m_pTexture);
I just get "Invalid arguments" (E_INVALIDARG) under Windows 7. According to "Extended support for shared Texture2D resources", it seems that only DirectX 11.1 guarantees this kind of usage: the bind flags D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET appear not to be supported under Windows 7 (where the DirectX version is 11), and without them ID3D11DeviceContext::GenerateMips has no effect. My application must support Windows 7, so is there an alternative solution?
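As an aside on MipLevels = 0: the runtime then allocates the full chain, whose length for a Width x Height texture is 1 + floor(log2(max(Width, Height))). A small stand-alone sketch (the function name is mine, not a Direct3D API):

```cpp
#include <algorithm>
#include <cstdint>

// Length of a full mip chain: halve the larger dimension until it
// reaches 1, counting the base image as level 0.
uint32_t FullMipCount(uint32_t width, uint32_t height) {
    uint32_t levels = 1;
    for (uint32_t size = std::max(width, height); size > 1; size >>= 1) {
        ++levels;
    }
    return levels;
}
```

For a 1920x1080 video frame this gives 11 levels, which is what the runtime allocates when MipLevels is 0.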

The reason it fails on Windows 7 but works on Windows 10 is that you are actually relying on an optional feature of the Direct3D 11.2 runtime: D3D11_FEATURE_DATA_D3D11_OPTIONS1.MapOnDefaultBuffers. You have D3D11_USAGE_DEFAULT and D3D11_CPU_ACCESS_WRITE set at the same time, which is not supported without this optional feature and is never supported on Windows 7. Even on Windows 10 there are devices that don't support it, so you can't rely on it working 100% of the time.
To get CPU write access, you need to use D3D11_USAGE_DYNAMIC. This can hurt the performance of rendering with that texture, so more typically you'd use D3D11_USAGE_DEFAULT without CPU write access. To initialize such a texture, you either fill a second texture created with D3D11_USAGE_STAGING (which always supports CPU write access) and copy it to the DEFAULT resource, or you make use of UpdateSubresource.
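A minimal sketch of that second path (DEFAULT usage with no CPU access flags, upload via UpdateSubresource, then GenerateMips), assuming the device/context and frame data from the question; all HRESULTs should be checked in real code:

```cpp
#include <d3d11.h>

// Same as the question's descriptor, except CPUAccessFlags is left at 0.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = nWidth;
desc.Height = nHeight;
desc.MipLevels = 0;                       // full mip chain
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;         // no D3D11_CPU_ACCESS_WRITE
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
desc.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS;

ID3D11Texture2D* texture = nullptr;
m_pD3dDevice->CreateTexture2D(&desc, nullptr, &texture);

// Upload the new frame into mip level 0 only (pPixels/rowPitch are the
// video frame's data pointer and byte stride; names assumed here) ...
m_pD3dContext->UpdateSubresource(texture, 0, nullptr, pPixels, rowPitch, 0);

// ... then let the GPU fill in the rest of the chain via the SRV.
ID3D11ShaderResourceView* srv = nullptr;
m_pD3dDevice->CreateShaderResourceView(texture, nullptr, &srv);
m_pD3dContext->GenerateMips(srv);
```

GenerateMips requires exactly the combination the question already uses: SHADER_RESOURCE | RENDER_TARGET bind flags plus D3D11_RESOURCE_MISC_GENERATE_MIPS, and it operates through a shader resource view, so it works fine on plain Direct3D 11 once the CPU-access flag is dropped.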
For a C++ example of all of this, including auto-generated mipmaps, see WICTextureLoader in the DirectX Tool Kit for DX11.
Windows 7 Service Pack 1 can be updated to the DirectX 11.1 runtime via KB2670838, which at this point is pretty widely deployed. There are some limitations when running on Windows 7, listed on MSDN: primarily, it only supports the 'software' features, not the 'hardware' features that require WDDM 1.2 drivers. The DirectX 11.2 runtime or later is not supported on Windows 7.
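If you need to know at runtime whether the 11.1 runtime (KB2670838) is actually present, one common check (a sketch; 'device' is assumed to be an existing ID3D11Device*) is to QueryInterface for the 11.1 device interface:

```cpp
#include <d3d11_1.h>

// On a machine without the DirectX 11.1 runtime, asking the device for
// ID3D11Device1 fails, so SUCCEEDED() tells you which runtime you have.
ID3D11Device1* device1 = nullptr;
bool has11_1Runtime =
    SUCCEEDED(device->QueryInterface(__uuidof(ID3D11Device1),
                                     reinterpret_cast<void**>(&device1)));
if (device1) device1->Release();
```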

Related

What is WINAPI_FAMILY_ONECORE_APP?

I was looking through Microsoft's port of OpenSSL on GitHub. One commit that caught my eye was "Adding Win10 Universal Platform support". In that commit, a partition called WINAPI_FAMILY_ONECORE_APP shows up; however, I'm not finding much about it when searching. There are two hits in English and 22 hits in Chinese (see below).
The post What’s new in Visual Studio Tools for Windows 10 Preview provides some quasi-bullet points with no explanations:
new API partition WINAPI_FAMILY_ONECORE_APP
ARM 64
Universal CRT
...
I have two questions:
What is WINAPI_FAMILY_ONECORE_APP, and how is it intended to be used?
Can I use WINAPI_FAMILY_ONECORE_APP to detect Aarch64/ARM64 on Windows 10 gadgets?
Here's Microsoft's use of it in OpenSSL (snipped from ssl/dtls1.h; the C++ comment was moved above the define for readability):
// winsock.h not present in WindowsPhone/WindowsStore, defining the expected struct here
#if defined(WINAPI_FAMILY) && (WINAPI_FAMILY == WINAPI_FAMILY_PHONE_APP || WINAPI_FAMILY == WINAPI_FAMILY_PC_APP || WINAPI_FAMILY == WINAPI_FAMILY_ONECORE_APP)
struct next_timeout {
    long tv_sec;
    long tv_usec;
} next_timeout;
#endif
I think this is the set of APIs that are available on all Windows platforms (mobile, PC, Xbox, HoloLens, IoT).
Windows OneCore
Windows OneCore is a platform for any device: phone, tablet, desktop, or IoT. Windows 10 provides a set of API and DDI interfaces that are common to multiple editions of Windows 10. This set of interfaces is called OneCore. With OneCore, you can also be assured that drivers and apps that are created using OneCore interfaces will run on multiple devices.

Is Direct3D 12 supported on Windows 10 Mobile (phone)?

Is Direct3D 12 supported on Windows 10 Mobile (phone)? I recently upgraded my personal project to Direct3D 12 under the impression that it runs on all Windows 10 Universal App platforms. My phone ran my old Direct3D 11.1 code just fine, but D3D12CreateDevice() fails with the error that the specified feature level (11_0, 11_1, 12_0, or 12_1) or interface (ID3D12Device) is not supported. Am I doing something wrong, or is D3D12 really not supported on phones? If it isn't supported, will it ever be? I don't mind developing only on PC for now, but I'd rather know now if it will never be supported.
https://msdn.microsoft.com/en-us/library/windows/desktop/dn899228(v=vs.85).aspx says:
Direct3D 12 provides four main benefits [...], and cross-platform development for a Windows 10 device (PC, tablet, console or phone).
https://msdn.microsoft.com/en-us/library/windows/desktop/dn899118(v=vs.85).aspx says:
To program with Direct3D 12, you need these components:
A hardware platform with a Direct3D 12-compatible GPU
Display drivers that support the Windows Display Driver Model (WDDM) 2.0
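To detect this at runtime rather than failing outright, you can probe with the minimum feature level and fall back to your Direct3D 11.1 path (a sketch; error handling elided):

```cpp
#include <d3d12.h>

// Ask for the lowest feature level D3D12 accepts; if even this fails,
// the device (e.g. a phone without WDDM 2.0 drivers) has no D3D12
// support and the app should fall back to its Direct3D 11 renderer.
ID3D12Device* device = nullptr;
HRESULT hr = D3D12CreateDevice(nullptr,                 // default adapter
                               D3D_FEATURE_LEVEL_11_0,  // minimum level
                               __uuidof(ID3D12Device),
                               reinterpret_cast<void**>(&device));
if (FAILED(hr)) {
    // No D3D12 here: create an ID3D11Device instead.
}
```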

How to create an OpenGL context with a specific graphics driver?

Some computers have more than one graphics card/chipset installed, even when (as with many laptops) they don't have more than one monitor.
I'm having trouble with a laptop system that's got both Intel and Nvidia graphics hardware. Intel's drivers are notoriously awful in their OpenGL support, and my code is running up against an inexplicable rendering bug, because it seems to default to the Intel system, not the Nvidia one, when creating the rendering context.
Is there any way to avert this at startup? To say something like "poll for all available graphics drivers, avoid Intel drivers if possible, and build me a OpenGL rendering context with the driver that will work"?
There's no portable way to do what you're asking, but this document describes how to force "High Performance Graphics Render" on systems with NVIDIA Optimus technology:
http://developer.download.nvidia.com/devzone/devcenter/gamegraphics/files/OptimusRenderingPolicies.pdf
Specifically, refer to the section "Global Variable NvOptimusEnablement (new in Driver Release 302)", which says:
Starting with the Release 302 drivers, application developers can direct the Optimus driver at runtime to use the High Performance Graphics to render any application, even those applications for which there is no existing application profile. They can do this by exporting a global variable named NvOptimusEnablement. The Optimus driver looks for the existence and value of the export. Only the LSB of the DWORD matters at this time. A value of 0x00000001 indicates that rendering should be performed using High Performance Graphics. A value of 0x00000000 indicates that this method should be ignored.
Example Usage:
extern "C" {
    __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}
Another possibility is the WGL_NV_gpu_affinity extension, but your WGL context needs to support it, and I'm not sure whether it works on mixed Intel/NVIDIA systems:
http://developer.download.nvidia.com/opengl/specs/WGL_nv_gpu_affinity.txt

Porting Windows 7 drivers to Windows XP

Is it possible to manually edit a driver to make it work on Windows XP? I guess there are many differences, but it must be possible for simple drivers, perhaps by porting the locations/buses they use?
Do you mean without recompilation? If so, it's not recommended.
If you are willing to recompile, use the appropriate WDK, select the appropriate build environment, and try to build. You may have to change the code depending on which APIs have changed or are unavailable.
Also note that drivers are compiled per OS, i.e. there are different build environments for Windows XP, Windows 2003, Windows Vista, Windows 7, etc.

Cocoa: Component Manager not finding all components in a 64-bit app

I'm using the Component Manager in a Mac app to get the list of installed components (my app is a video player, and I want to get at the list of installed QuickTime codecs).
I have code like this:
- (void) findComponents
{
    ComponentDescription desc;
    desc.componentType = 0;
    desc.componentSubType = 0;
    desc.componentManufacturer = 0;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;

    long numComps = CountComponents( &desc );
    NSLog( @"found %ld components", numComps );

    Component aComponent = 0;
    while( (aComponent = FindNextComponent( aComponent, &desc )) ) {
        // Do stuff with this component.
    }
}
When I compile my app 32-bit, it works as I'd expect (927 components come back from CountComponents). However, when compiled 64-bit, CountComponents returns only 85 components (none of which are the QuickTime codecs I'm looking for).
The Component Manager docs don't say anything about 64-bit issues with CountComponents/FindNextComponent. It's worth noting that the (admittedly ancient) Apple DTS sample code upon which this code is based has the same issue when compiled 64-bit.
Any ideas what I'm doing wrong? I don't want to have to resort to manually finding components and parsing 'thng' resources.
EDIT: is it possible that in a 64-bit app, Component Manager is only listing the 64-bit components? In which case, perhaps this functionality could be built into a 32-bit shared library, and called from my 64-bit app?
However, when compiled 64-bit, CountComponents returns only 85 components (none of which are the QuickTime codecs I'm looking for).
QuickTime codecs that use the QuickTime C API are 32-bit only, and Apple has not ported that API to 64-bit. Note that, application-wise, you can use QTKit, the newer Objective-C API. QTKit tries to use QuickTime X to play a movie; if it can't, because there is no suitable QuickTime X codec available, it falls back to QuickTime 7, which in turn is able to use the old QuickTime components. This is transparent to developers who use QTKit.
Is it possible that in a 64-bit app, Component Manager is only listing the 64-bit components?
Yes, that’s correct. Note that it is not possible to mix 32-bit and 64-bit code in the same process, so it makes sense that Component Manager would limit queries to the components that could be loaded onto the process: 32-bit components for a 32-bit process, 64-bit components for a 64-bit process.
In which case, perhaps this functionality could be built into a 32-bit shared library, and called from my 64-bit app?
As described above, you won't be able to load a 32-bit dynamic library into a 64-bit process. What you can do is create a separate 32-bit helper executable and use it to obtain the list of 32-bit components. You can share the source code that lists components between your main application and the helper, but they must be separate executables.
In fact, you can see this in action if you use QuickTime X to play a movie that requires a 32-bit QuickTime component: a 32-bit QTKitServer process is spawned in order to decode the movie using a QuickTime component and send the results back to 64-bit QuickTime X. John Siracusa describes this in his Snow Leopard Review. You may also want to take a look at the Adopting QuickTime X for Playback section in QTKit Application Programming Guide.
