I have a laptop with two adapters: an Intel and an NVIDIA. I'm running Windows 10 and there is no option in the BIOS for turning off the embedded Intel adapter. I can specify to use the NVIDIA adapter for specific applications, or as the default for all Direct3D device creation. When I use the Intel adapter (which is the fixed adapter for the Windows desktop), my 3D application in windowed mode works fine.
If I change the NVIDIA global setting to force the NVIDIA adapter for all Direct3D devices, or change my code to select the NVIDIA adapter, the code executes without any errors (I have DirectX Debug device attached) but nothing gets rendered in my window.
I believe that it is not possible to have a Windowed swapchain output attached to an Adapter that isn't the adapter used by the Windows desktop, but I have never seen this made explicit.
This means that on a laptop using an embedded hardware adapter for the Windows desktop I cannot make use of the more powerful NVIDIA adapter in a window and will have to use full-screen mode.
Can anyone confirm this, or suggest a device creation method that successfully allows me to address the second adapter in a window?
For clarity, my device creation code is:
private static void initializeDirect3DGraphicsDevice(System.Windows.Forms.Control winFormsControl, out Device device, out SharpDX.DXGI.SwapChain sc)
{
    SharpDX.DXGI.SwapChainDescription destination = new SharpDX.DXGI.SwapChainDescription()
    {
        BufferCount = 1,
        ModeDescription = new SharpDX.DXGI.ModeDescription(
            winFormsControl.ClientSize.Width,
            winFormsControl.ClientSize.Height,
            new SharpDX.DXGI.Rational(60, 1),
            SharpDX.DXGI.Format.R8G8B8A8_UNorm),
        IsWindowed = true,
        OutputHandle = winFormsControl.Handle,
        SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
        SwapEffect = SharpDX.DXGI.SwapEffect.Discard,
        Usage = SharpDX.DXGI.Usage.RenderTargetOutput
    };
    using (SharpDX.DXGI.Factory1 factory = new SharpDX.DXGI.Factory1())
    {
        // Pick the adapter with the best video memory allocation - this is the NVIDIA adapter
        List<SharpDX.DXGI.Adapter> adapters = factory.Adapters.OrderBy(item => (long)item.Description.DedicatedVideoMemory).Reverse().ToList();
        SharpDX.DXGI.Adapter bestAdapter = adapters.First();
        foreach (SharpDX.DXGI.Output output in bestAdapter.Outputs)
        {
            System.Diagnostics.Debug.WriteLine("Adapter " + bestAdapter.Description.Description.Substring(0, 20) + " output " + output.Description.DeviceName);
        }
        device = new Device(bestAdapter, DeviceCreationFlags.Debug);
        // Uncomment the below to allow the NVIDIA control panel to select the adapter for me.
        //device = new Device(SharpDX.Direct3D.DriverType.Hardware, DeviceCreationFlags.Debug);
        sc = new SharpDX.DXGI.SwapChain(factory, device, destination);
        factory.MakeWindowAssociation(winFormsControl.Handle, SharpDX.DXGI.WindowAssociationFlags.IgnoreAll);
        System.Diagnostics.Debug.WriteLine("Device created with feature level " + device.FeatureLevel + " on adapter " + bestAdapter.Description.Description.Substring(0, 20));
        System.Diagnostics.Debug.WriteLine("");
    }
}
The proprietary technology NVIDIA uses to manage both an Intel integrated device and an NVIDIA discrete part is known as Optimus. AMD has a similar technology they call PowerXpress. They both play tricks with the default Direct3D device in the driver to control this behavior, which can be a bit strange to cope with as a developer.
The hardware solution for these 'hybrid graphics' devices deals with the issue of merging the scanout from both GPUs so the monitor is always attached to just a single device.
The user can always choose to force an application to use one or the other through the control panel, which is the best user experience. The problem is that the default is often not a good choice for games. The solution for Win32 classic desktop apps is to put a 'magic export' into your EXE that the NVIDIA/AMD software will use to pick a default for an application not in the database:
// Indicates to hybrid graphics systems to prefer the discrete part by default
extern "C"
{
    __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
    __declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
}
The other option is to not use the default adapter when creating the device and explicitly enumerate the adapters yourself. This should work, but it means the user no longer has an easy way to change which device is being used. For sample enumeration code, see DeviceResources and the GetHardwareAdapter method in particular; a rough sketch follows below. The drivers mess around with the enumeration as I note above, so the 'magic export' is probably the best general solution.
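As an illustration of that approach (my own sketch, not from the linked sample), DXGI 1.6 on Windows 10 1803+ can do the sorting for you via IDXGIFactory6::EnumAdapterByGpuPreference, so you ask for the high-performance adapter directly instead of ordering adapters by dedicated video memory:

#include <dxgi1_6.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Returns the adapter DXGI considers "high performance" (usually the discrete GPU),
// or nullptr if the factory creation or enumeration call fails.
ComPtr<IDXGIAdapter1> PickHighPerformanceAdapter()
{
    ComPtr<IDXGIFactory6> factory;
    if (FAILED(CreateDXGIFactory2(0, IID_PPV_ARGS(&factory))))
        return nullptr;

    ComPtr<IDXGIAdapter1> adapter;
    // Index 0 with DXGI_GPU_PREFERENCE_HIGH_PERFORMANCE is the most preferred adapter.
    if (FAILED(factory->EnumAdapterByGpuPreference(
            0, DXGI_GPU_PREFERENCE_HIGH_PERFORMANCE, IID_PPV_ARGS(&adapter))))
        return nullptr;
    return adapter;
}

The adapter returned here would then be passed to D3D11CreateDevice (or, in the SharpDX code above, to the Device constructor) in place of the memory-sorted pick.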
Related
I'm looking to programmatically make changes to a macOS system's audio MIDI setup, as configurable via a GUI using the built-in Audio MIDI Setup application. Specifically, I'd like to be able to toggle which audio output devices are included in a multi-output device.
Is there any method available for accomplishing that? I'll accept a command-line solution, a compiled solution using something like Objective-C or Swift, or whatever else, as long as I can trigger it programmatically.
Yes, there is.
On the Mac there is a framework called Core Audio. The interface found in AudioHardware.h is an interface to the HAL (Hardware Abstraction Layer). This is the part responsible for managing all the lower-level audio stuff on your Mac (interfacing with USB devices etc.).
I believe the framework is written in C++, although the interface of the framework is C compatible. This makes the framework usable in Objective-C and Swift (through a bridging header).
To start using this framework you should read AudioHardware.h in CoreAudio.framework. You can find this file in Xcode by pressing Cmd + Shift + O and typing AudioHardware.h.
To give you an example as starter (which creates a new aggregate with no subdevices):
// Create a CFDictionary to hold all the options associated with the to-be-created aggregate
CFMutableDictionaryRef params = CFDictionaryCreateMutable(kCFAllocatorDefault, 10, NULL, NULL);
// Define the UID of the to-be-created aggregate
CFDictionaryAddValue(params, CFSTR(kAudioAggregateDeviceUIDKey), CFSTR("DemoAggregateUID"));
// Define the name of the to-be-created aggregate
CFDictionaryAddValue(params, CFSTR(kAudioAggregateDeviceNameKey), CFSTR("DemoAggregateName"));
// Define if the aggregate should be a stacked aggregate (ie multi-output device)
static char stacked = 0; // 0 = normal aggregate, 1 = stacked (i.e. multi-output device)
CFNumberRef cf_stacked = CFNumberCreate(kCFAllocatorDefault, kCFNumberCharType, &stacked);
CFDictionaryAddValue(params, CFSTR(kAudioAggregateDeviceIsStackedKey), cf_stacked);
// Create the actual aggregate device
AudioObjectID resulting_id = 0;
OSStatus result = AudioHardwareCreateAggregateDevice(params, &resulting_id);
// Check if we got an error.
// Note that when running this the first time all should be OK; running it a second time should result in an error, as the device we want to create already exists.
if (result)
{
    printf("Error: %d\n", result);
}
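Toggling which outputs are included (what the question asks about) then comes down to writing the aggregate's sub-device list. A rough sketch of that step (my own addition, not part of the original starter example; the UID string is a placeholder, query real ones with kAudioDevicePropertyDeviceUID on the devices you want to include):

// Replace the aggregate's sub-device list with a single (placeholder) device UID.
CFStringRef subdevices[] = { CFSTR("PlaceholderDeviceUID") };
CFArrayRef subdeviceList = CFArrayCreate(kCFAllocatorDefault,
                                         (const void **)subdevices, 1,
                                         &kCFTypeArrayCallBacks);
AudioObjectPropertyAddress addr = {
    kAudioAggregateDevicePropertyFullSubDeviceList,
    kAudioObjectPropertyScopeGlobal,
    kAudioObjectPropertyElementMaster
};
OSStatus err = AudioObjectSetPropertyData(resulting_id, &addr, 0, NULL,
                                          sizeof(subdeviceList), &subdeviceList);
if (err)
{
    printf("Error setting sub-device list: %d\n", (int)err);
}
CFRelease(subdeviceList);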
There are some frameworks which make interfacing a bit easier by wrapping Core Audio calls. However, none of the ones I found wrap the creation and/or manipulation of aggregate devices. Still, they can be useful for finding the right devices in the system: AMCoreAudio (Swift), JACK (C & C++), libsoundio (C), RtAudio (C++).
According to Apple documentation, when adding the value "YES" (or true) for key "NSSupportsAutomaticGraphicsSwitching" to the Info.plist file for an OSX app, the integrated GPU will be invoked on dual-GPU systems (as opposed to the discrete GPU). This is useful as the integrated GPU -- while less performant -- is adequate for my app's needs and consumes less energy.
Unfortunately, building as per above and subsequently inspecting the Activity Monitor (Energy tab: "Requires High Perf GPU" column) reveals that my Metal API-enabled app still uses the discrete GPU, despite requesting the integrated GPU.
Is there any way I can give a hint to the Metal system itself to use the integrated GPU?
The problem was that Metal API defaults to using the discrete GPU. Using the following code, along with the correct Info.plist configuration detailed above, results in the integrated GPU being used:
NSArray<id<MTLDevice>> *devices = MTLCopyAllDevices();
gpu_ = nil;
// Low power device is sufficient - try to use it!
for (id<MTLDevice> device in devices) {
    if (device.isLowPower) {
        gpu_ = device;
        break;
    }
}
// below: probably not necessary since there is always
// an integrated GPU, but doesn't hurt.
if (gpu_ == nil)
    gpu_ = MTLCreateSystemDefaultDevice();
If you're using an MTKView remember to pass gpu_ to its initWithFrame:device: method.
I made a simple OSG off-screen renderer that renders without popping up a window.
osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits;
traits->x = 0;
traits->y = 0;
traits->width = screenWidth;
traits->height = screenHeight;
if (offScreen) {
    traits->windowDecoration = false;
    traits->doubleBuffer = true;
    traits->pbuffer = true;
} else {
    traits->windowDecoration = true;
    traits->doubleBuffer = true;
    traits->pbuffer = false;
}
traits->sharedContext = 0;
std::cout << "DisplayName : " << traits->displayName() << std::endl;
traits->readDISPLAY();

osg::GraphicsContext* _gc = osg::GraphicsContext::createGraphicsContext(traits.get());
if (!_gc) {
    osg::notify(osg::NOTICE) << "Failed to create pbuffer, falling back to normal graphics window." << std::endl;
    traits->pbuffer = false;
    _gc = osg::GraphicsContext::createGraphicsContext(traits.get());
}
However, if I ssh into the server and run the application, it actually uses the client GPU rather than the server GPU. There are four GeForce GPUs on the server. I tried changing DISPLAY to hostname:0.0, but it did not work.
What should I do to make the application use the server GPU rather than the client GPU on Linux?
First a little bit of nomenclature: in X11, the system the display is connected to is the server, so you have your terminology reversed. To make use of the GPUs on the remote system for OpenGL rendering, the currently existing Linux driver model requires an X11 server to be running (this is about to change with Wayland, but there's still a lot of work to be done before it can be used). Essentially the driver is loaded into the X server, hence you need one running.
Of course an X server cannot be accessed by just any user. An XAuthority token is required (see the xauth manpage). Also, if no monitors are connected, you may have to do extra configuration to convince the GPU's driver not to refuse to start. You probably also want to disable the use of input devices.
Then, with an X server running and the user that shall run the OSG program holding an XAuthority token, you can run the OSG program. Yes, it is tedious, but at the moment we're stuck with that.
I've done some searching, and for those who wind up at this question, I'll summarize what I found. I'll update with the specific commands that enable server-side off-screen rendering.
And yes, it is definitely possible.
Use VirtualGL to route all the commands back to the server.
VirtualGL is an X11-specific toolkit that captures OpenGL commands and executes them on the server-side GPU. However, this might change server-side OpenGL behavior, so I would not recommend it if other users are using OpenGL at the same time.
Offscreen rendering using Mesa graphics library.
Mesa is an open-source implementation of the OpenGL specification - a system for rendering interactive 3D graphics.
A variety of device drivers allows Mesa to be used in many different environments ranging from software emulation to complete hardware acceleration for modern GPUs.
Mesa allows you to create a GraphicsContext that resides in server-side memory and allows off-screen rendering. link. I'll update with some code; see the sketch below.
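To make that concrete, here is a minimal OSMesa sketch (my own illustration, not from the original post; classic OSMesa renders in software into a buffer you allocate, so treat it as the fallback end of the Mesa spectrum rather than server-GPU acceleration):

#include <GL/osmesa.h>
#include <GL/gl.h>
#include <vector>
#include <cstdio>

int main()
{
    const int width = 640, height = 480;
    std::vector<unsigned char> buffer(width * height * 4);   // RGBA output buffer

    // RGBA context with a 16-bit depth buffer, no stencil/accum, no share list.
    OSMesaContext ctx = OSMesaCreateContextExt(OSMESA_RGBA, 16, 0, 0, NULL);
    if (!ctx || !OSMesaMakeCurrent(ctx, buffer.data(), GL_UNSIGNED_BYTE, width, height)) {
        std::fprintf(stderr, "OSMesa context creation failed\n");
        return 1;
    }

    glClearColor(0.2f, 0.3f, 0.4f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glFinish();   // the rendered pixels now sit in 'buffer'; no window or X server involved

    OSMesaDestroyContext(ctx);
    return 0;
}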
I've got a call like this:
while (EnumDisplayDevices(NULL, index, &displayDevice, 0)) {
    // do stuff
}
My understanding is that this is supposed to report all graphics devices as displayDevice, but it only reports one, multiple times. I've got an Intel card as my primary adapter and an NVidia card as secondary -- I'm hoping to be able to not only get the names and info of both cards but also to determine which one the app is running with (i.e. the app defaults to the Intel, but I can change the default in the NVidia control panel or even with the context menu in Windows Explorer). However, the call always reports three devices... and all three are the Intel. It reports the Intel as \\.\DISPLAY1, \\.\DISPLAY2 and \\.\DISPLAY3.
I can confirm that my code runs with the correct graphics card by looking at the DLLs that it uses. (Indeed, I needed to connect a second monitor to get it to use the NVidia card at all -- otherwise, the app would launch on the Intel no matter what card I chose.) Either way, the Intel card always comes back as DISPLAY_DEVICE_ACTIVE.
This is because of the DISPLAY_DEVICE_PRIMARY_DEVICE flag in the DISPLAY_DEVICE structure -- the primary desktop is on that device. For a system with a single display card, this is always set. For a system with multiple display cards, only one device can have this set.
To fetch the active display devices of the second GPU, use the following flag condition:
while (EnumDisplayDevices(NULL, index, &dd, 0))
{
    // Count devices that are active, are not the primary desktop device,
    // and are not a mirroring (pseudo) driver.
    if ((dd.StateFlags & DISPLAY_DEVICE_ACTIVE) &&
        !(dd.StateFlags & DISPLAY_DEVICE_PRIMARY_DEVICE) &&
        !(dd.StateFlags & DISPLAY_DEVICE_MIRRORING_DRIVER))
    {
        activeDispCount++;
    }
    index++;
}
Is anyone aware of a sane way to get tablet/stylus pressure information on Windows?
It's possible to distinguish stylus from mouse with ::GetMessageExtraInfo, but you can't get any more information beyond that. I also found the WinTab API in an out-of-the-way corner of the Wacom site, but that's not part of Windows as far as I can tell, and it has a completely distinct event/messaging system from the message queue.
Given that all I want is the most basic pressure information, surely there is a standard Win32/COM API; is anyone aware of what it might be?
The current way to do this is to handle WM_POINTERnnn msgs.
Note this is for Win 8 and later.
Note you will get these msgs for touch AND pen, so you'll need to check the pointerType in order to test for pen. The WPARAM received by a WNDPROC for WM_POINTERnnnn msgs such as WM_POINTERUPDATE contains the pointer id, which you will need in order to request more info. Empirically I found that WM_POINTERUPDATE results in info that contains pressure data, whereas if the pointer flags indicate down/up there is no pressure info.
const WORD wid = GET_POINTERID_WPARAM(wParam);
POINTER_INFO piTemp = {NULL};
GetPointerInfo(wid, &piTemp);
if (piTemp.pointerType == PT_PEN)
{
    UINT32 entries = 0;
    UINT32 pointers = 0;
    GetPointerFramePenInfoHistory(wid, &entries, &pointers, NULL); // how many
    // TODO, allocate space needed for the info, process the data in a loop to retrieve it, test pointerInfo.pointerFlags for down/up/update.
}
Once you know you are dealing with a pen, you can get the pressure info from the POINTER_PEN_INFO struct.
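As a minimal sketch of that last step (my own illustration, using the single-pointer GetPointerPenInfo call rather than the history API shown above):

// Inside the WM_POINTERUPDATE handler, once the pointer is known to be a pen.
const UINT32 pointerId = GET_POINTERID_WPARAM(wParam);
POINTER_PEN_INFO penInfo = {};
if (GetPointerPenInfo(pointerId, &penInfo))
{
    // penInfo.pressure is normalized to the range 0..1024 by the pointer API.
    const float normalizedPressure = penInfo.pressure / 1024.0f;
    // penInfo.tiltX / penInfo.tiltY give the pen tilt in degrees, if the hardware supports it.
}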
This is similar to handling touch, although for touch you'd want gesture recognition and inertia. There is a Microsoft sample illustrating the use of these functions.
It's part of a Build talk:
https://channel9.msdn.com/Events/Build/2013/4-022
You need to use the Tablet PC Pen/Ink API. The COM version of the API lives in InkObj.dll. Here is a starting point for documentation: http://msdn.microsoft.com/en-us/library/ms700664.aspx
If I remember correctly, InkObj.dll is available on Windows XP SP2 and all later Windows client OSes, regardless of whether the machine is a Tablet PC.
UPDATE:
It's been a number of years since I initially provided this answer, but WinTab has become the de facto standard, and N-trig more or less folded, eventually building a wrapper to allow the WinTab API to be accessed via this digitizer.
(http://www.tabletpcbuzz.com/showthread.php?37547-N-trig-Posts-WinTAB-Support-Driver)
This is a pretty late response, but recently my wife and I purchased a Dell XT tablet PC, which as it turns out actually uses N-trig, a suite of interfaces that utilize Ink, the accepted new Windows API that shipped with Windows XP Tablet Edition, then SP2 and all versions thereafter.
A lot of Wacom tablets and others use the WinTab API, which is not currently open, nor is its use really permitted. From what I hear the folks who maintain it are pretty sue-happy.
So it depends on what type of tablet you're using, and the drivers you have installed for it. In my biased opinion, you should work with Ink, as it provides (or at least through NTrig and Windows 7 WILL provide) multi-touch capability and will likely be the new standard for tablet interfaces. But as of now, NTrig devices do not translate their pressure and angle information to common Wintab-based applications, such as Photoshop or Corel Painter. The applications tend to require at least some support for Microsoft's Tablet API in order to function properly.
If you're using the UWP Windows Runtime, it's quite straightforward. The PointerEventArgs event seems to have all the necessary data.
Modified Core App (C++/WinRT) template project snippet from Visual Studio 2019:
void OnPointerMoved(IInspectable const &, PointerEventArgs const &args)
{
    if (m_selected)
    {
        float2 const point = args.CurrentPoint().Position();
        m_selected.Offset(
        {
            point.x + m_offset.x,
            point.y + m_offset.y,
            0.0f
        });

        // (new!) Change sprite color based on pen pressure and tilt
        auto sprite = m_selected.as<SpriteVisual>();
        auto const props = args.CurrentPoint().Properties();
        auto const pressure = props.Pressure();
        auto const orientation = props.Orientation() / 360.0f;
        auto const tiltx = (props.XTilt() + 90) / 180.0f;
        auto const tilty = (props.YTilt() + 90) / 180.0f;
        Compositor compositor = m_visuals.Compositor();
        sprite.Brush(compositor.CreateColorBrush({
            (uint8_t)(pressure * 0xFF),
            (uint8_t)(tiltx * 0xFF),
            (uint8_t)(tilty * 0xFF),
            (uint8_t)(orientation * 0xFF)
        }));
    }
}
Similar code will likely work in C#, JavaScript, etc.