OSG uses client-side GPU not host GPU - x11

I made a simple OSG off-screen renderer that renders without popping up a window:
osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits;
traits->x = 0;
traits->y = 0;
traits->width = screenWidth;
traits->height = screenHeight;
if (offScreen) {
    traits->windowDecoration = false;
    traits->doubleBuffer = true;
    traits->pbuffer = true;
} else {
    traits->windowDecoration = true;
    traits->doubleBuffer = true;
    traits->pbuffer = false;
}
traits->sharedContext = 0;
std::cout << "DisplayName : " << traits->displayName() << std::endl;
traits->readDISPLAY();

osg::GraphicsContext* _gc = osg::GraphicsContext::createGraphicsContext(traits.get());

if (!_gc) {
    osg::notify(osg::NOTICE) << "Failed to create pbuffer, falling back to normal graphics window." << std::endl;
    traits->pbuffer = false;
    _gc = osg::GraphicsContext::createGraphicsContext(traits.get());
}
However, if I ssh into the server and run the application, it actually uses the client GPU rather than the server GPU. There are four GeForce GPUs on the server. I tried changing DISPLAY to hostname:0.0, but it did not work.
What should I do to make the application use the server GPU instead of the client GPU on Linux?

First, a little bit of nomenclature: in X11, the system to which the display is connected is the server, so you have your terminology reversed. To make use of the GPUs on the remote system for OpenGL rendering, the currently existing Linux driver model requires an X11 server to be running (this is about to change with Wayland, but there is still a lot of work to be done before it can be used). Essentially, the driver is loaded into the X server, hence you need one.
Of course, an X server cannot be accessed by just any user. An XAuthority token is required (see the xauth manpage). Also, if no monitors are connected, you may have to do some extra configuration to convince the GPU's driver not to refuse to start. You probably also want to disable the use of input devices.
Then, with an X server running and the user who is to run the OSG program holding an XAuthority token, you can run the OSG program. Yes, it is tedious, but at the moment we're stuck with that.
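For example, a minimal sketch (assuming the remote machine's X server runs on display :0 and the process has a valid XAuthority token for it) would point the traits at that display rather than at the SSH-forwarded one, before createGraphicsContext() is called:
// Target the remote machine's local X display (:0, screen 0) instead of the
// SSH-forwarded display; assumes XAUTHORITY grants access to that display.
traits->hostName   = "";   // empty host name = local X server on that machine
traits->displayNum = 0;
traits->screenNum  = 0;
// Alternatively, set DISPLAY in the environment and let OSG pick it up:
// setenv("DISPLAY", ":0.0", 1);   // needs <cstdlib>
// traits->readDISPLAY();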

I've done some searching, and for those who wind up at this question, I'll summarize what I found; I'll update with the specific commands that enable server-side off-screen rendering.
And yes, it is definitely possible.
1. Use VirtualGL to route the OpenGL commands to the server.
VirtualGL is an X11-specific tool that intercepts OpenGL commands and executes them on a server-side GPU (applications are typically launched through its vglrun wrapper). However, this can change server-side OpenGL behavior, so I would not recommend it if other users are using OpenGL on the same machine at the same time.
2. Off-screen rendering using the Mesa graphics library.
Mesa is an open-source implementation of the OpenGL specification - a system for rendering interactive 3D graphics.
A variety of device drivers allows Mesa to be used in many different environments, ranging from software emulation to complete hardware acceleration for modern GPUs.
Mesa lets you create a GraphicsContext that resides in server-side memory and allows off-screen rendering (link). A minimal sketch follows below.
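For instance, a rough sketch of off-screen rendering with Mesa's OSMesa interface (buffer size here is illustrative; it assumes the off-screen headers and library, GL/osmesa.h and -lOSMesa, are installed):
#include <GL/osmesa.h>
#include <GL/gl.h>
#include <vector>

int main()
{
    // Create an off-screen Mesa context: 32-bit RGBA color, 16-bit depth,
    // no stencil or accumulation buffers.
    OSMesaContext ctx = OSMesaCreateContextExt(OSMESA_RGBA, 16, 0, 0, NULL);

    const int width = 1024, height = 768;          // illustrative size
    std::vector<unsigned char> pixels(width * height * 4);

    // Bind the context to the buffer; all subsequent GL rendering lands in 'pixels'.
    OSMesaMakeCurrent(ctx, pixels.data(), GL_UNSIGNED_BYTE, width, height);

    // ... issue OpenGL / OSG draw calls here and read the results from 'pixels' ...

    OSMesaDestroyContext(ctx);
    return 0;
}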

Related

Windowed Rendering Different Adapter

I have a laptop with two adapters - an Intel and an NVIDIA. I'm running Windows 10 and there is no option in the BIOS for turning off the embedded Intel adapter. I can specify to use the NVIDIA adapter for specific applications, or as the default for all Direct3D device creation. When I use the Intel adapter (which is the fixed adapter for the Windows desktop), my 3D application in windowed mode works fine.
If I change the NVIDIA global setting to force the NVIDIA adapter for all Direct3D devices, or change my code to select the NVIDIA adapter, the code executes without any errors (I have DirectX Debug device attached) but nothing gets rendered in my window.
I believe that it is not possible to have a Windowed swapchain output attached to an Adapter that isn't the adapter used by the Windows desktop, but I have never seen this made explicit.
This means on a laptop using an embedded hardware adapter for the Windows desktop I cannot make use of the more powerful NVIDIA adapter in a Window and will have to use full-screen mode.
Can anyone confirm this, or suggest a device creation method that successfully allows me to address the second adapter in a window?
For clarity, my device creation code is:
private static void initializeDirect3DGraphicsDevice(System.Windows.Forms.Control winFormsControl, out Device device, out SharpDX.DXGI.SwapChain sc)
{
    SharpDX.DXGI.SwapChainDescription destination = new SharpDX.DXGI.SwapChainDescription()
    {
        BufferCount = 1,
        ModeDescription = new SharpDX.DXGI.ModeDescription(
            winFormsControl.ClientSize.Width,
            winFormsControl.ClientSize.Height,
            new SharpDX.DXGI.Rational(60, 1),
            SharpDX.DXGI.Format.R8G8B8A8_UNorm),
        IsWindowed = true,
        OutputHandle = winFormsControl.Handle,
        SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
        SwapEffect = SharpDX.DXGI.SwapEffect.Discard,
        Usage = SharpDX.DXGI.Usage.RenderTargetOutput
    };
    using (SharpDX.DXGI.Factory1 factory = new SharpDX.DXGI.Factory1())
    {
        // Pick the adapter with the best video memory allocation - this is the NVIDIA adapter
        List<SharpDX.DXGI.Adapter> adapters = factory.Adapters.OrderBy(item => (long)item.Description.DedicatedVideoMemory).Reverse().ToList();
        SharpDX.DXGI.Adapter bestAdapter = adapters.First();
        foreach (SharpDX.DXGI.Output output in bestAdapter.Outputs)
        {
            System.Diagnostics.Debug.WriteLine("Adapter " + bestAdapter.Description.Description.Substring(0, 20) + " output " + output.Description.DeviceName);
        }
        device = new Device(bestAdapter, DeviceCreationFlags.Debug);
        // Uncomment the below to allow the NVIDIA control panel to select the adapter for me.
        //device = new Device(SharpDX.Direct3D.DriverType.Hardware, DeviceCreationFlags.Debug);
        sc = new SharpDX.DXGI.SwapChain(factory, device, destination);
        factory.MakeWindowAssociation(winFormsControl.Handle, SharpDX.DXGI.WindowAssociationFlags.IgnoreAll);
        System.Diagnostics.Debug.WriteLine("Device created with feature level " + device.FeatureLevel + " on adapter " + bestAdapter.Description.Description.Substring(0, 20));
        System.Diagnostics.Debug.WriteLine("");
    }
}
The proprietary technology NVIDIA uses to manage both an Intel integrated device and an NVIDIA discrete part is known as Optimus. AMD has a similar technology they call PowerXpress. They both play tricks with the default Direct3D device in the driver to control this behavior, which can be a bit strange to cope with as a developer.
The hardware solution for these 'hybrid graphics' devices deals with the issue of merging the scanout from both GPUs, so the monitor is always attached to just a single device.
The user can always choose to force an application to use one or the other through the control panel, which is the best user experience. The problem is that the default is often not a good choice for games. The solution for Win32 classic desktop apps is to put a 'magic export' into your EXE that the NVIDIA/AMD software will use to pick a default for an application not in the database:
// Indicates to hybrid graphics systems to prefer the discrete part by default
extern "C"
{
    __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
    __declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
}
The other option is not to use the default adapter when creating the device, but to explicitly enumerate the adapters and pick one. This should work, but it means the user no longer has an easy way to change which device is being used. For example enumeration code, see DeviceResources, and the GetHardwareAdapter method in particular; a sketch along those lines is shown below. The drivers mess around with the enumeration as I note above, so the 'magic export' is probably the best general solution.
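A rough C++/DXGI sketch of that kind of explicit enumeration (assuming D3D11 and DXGI 1.1, with error handling omitted; the helper name is just for illustration):
#include <d3d11.h>
#include <dxgi.h>
#include <wrl/client.h>      // link with d3d11.lib and dxgi.lib

using Microsoft::WRL::ComPtr;

// Enumerate the adapters, keep the one with the most dedicated video memory,
// and create the device on it explicitly instead of using the default adapter.
ComPtr<ID3D11Device> CreateDeviceOnBiggestAdapter()
{
    ComPtr<IDXGIFactory1> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> best;
    SIZE_T bestMemory = 0;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.DedicatedVideoMemory > bestMemory)
        {
            bestMemory = desc.DedicatedVideoMemory;
            best = adapter;
        }
    }

    // With an explicit adapter, the driver type must be UNKNOWN.
    ComPtr<ID3D11Device> device;
    D3D11CreateDevice(best.Get(), D3D_DRIVER_TYPE_UNKNOWN, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, nullptr);
    return device;
}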

How to request use of integrated GPU when using Metal API?

According to Apple documentation, when adding the value "YES" (or true) for key "NSSupportsAutomaticGraphicsSwitching" to the Info.plist file for an OSX app, the integrated GPU will be invoked on dual-GPU systems (as opposed to the discrete GPU). This is useful as the integrated GPU -- while less performant -- is adequate for my app's needs and consumes less energy.
Unfortunately, building as per above and subsequently inspecting the Activity Monitor (Energy tab: "Requires High Perf GPU" column) reveals that my Metal API-enabled app still uses the discrete GPU, despite requesting the integrated GPU.
Is there any way I can give a hint to the Metal system itself to use the integrated GPU?
The problem was that the Metal API defaults to using the discrete GPU. Using the following code, along with the correct Info.plist configuration detailed above, results in the integrated GPU being used:
NSArray<id<MTLDevice>> *devices = MTLCopyAllDevices();
gpu_ = nil;

// Low power device is sufficient - try to use it!
for (id<MTLDevice> device in devices) {
    if (device.isLowPower) {
        gpu_ = device;
        break;
    }
}

// below: probably not necessary since there is always
// an integrated GPU, but doesn't hurt.
if (gpu_ == nil)
    gpu_ = MTLCreateSystemDefaultDevice();
If you're using an MTKView, remember to pass gpu_ to its initWithFrame:device: method.

Obtaining supported GPUs on Cocos2d-x

I'm trying to detect which texture compression formats the GPU in the device running the game supports, in order to use the correct compression for that GPU (I don't know if this is the best way to do this; I'm open to any suggestions :) )
std::string GPUInfo::getTC()
{
    std::string TC;
    cocos2d::Configuration::getInstance()->gatherGPUInfo();
    if (cocos2d::Configuration::getInstance()->supportsPVRTC())
        TC = ".pvr.ccz";
    else if (cocos2d::Configuration::getInstance()->supportsATITC())
        TC = ".dds";
    else
        TC = ".png";
    CCLOG("Texture compression format -> %s", TC.c_str());
    return TC;
}
But this keeps causing this error:
call to OpenGL ES API with no current context (logged once per thread)
Is there another way to obtain which GPUs are supported in the current device?
You are almost there.
cocos2d::Configuration::getInstance()->gatherGPUInfo();
You don't need to call gatherGPUInfo(), because it is automatically called from Director::setOpenGLView:
https://github.com/cocos2d/cocos2d-x/blob/fe4b34fcc3b6bb312bd66ca5b520630651575bc3/cocos/base/CCDirector.cpp#L361-L369
You can call supportsPVRTC() and supportsATITC() without a GL error from anywhere in the main thread, but you should call them after Cocos2d-x initialization (setOpenGLView). For example:
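A version of the getTC() helper from the question without the gatherGPUInfo() call (assuming it is only invoked after Director::setOpenGLView has run) would be:
std::string GPUInfo::getTC()
{
    // gatherGPUInfo() was already called by Director::setOpenGLView,
    // so just query the cached capabilities here.
    std::string TC;
    if (cocos2d::Configuration::getInstance()->supportsPVRTC())
        TC = ".pvr.ccz";
    else if (cocos2d::Configuration::getInstance()->supportsATITC())
        TC = ".dds";
    else
        TC = ".png";
    CCLOG("Texture compression format -> %s", TC.c_str());
    return TC;
}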

What does an Audio Unit Host need to do to make use of non-Apple Audio Units?

I am writing an Objective-C++ framework which needs to host Audio Units. Everything works perfectly fine if I attempt to make use of Apple's default units like the DLS Synth and various effects. However, my application seems to be unable to find any third-party Audio Units (in /Library/Audio/Plug-Ins/Components).
For example, the following code snippet...
CAComponentDescription tInstrumentDesc =
CAComponentDescription('aumu','dls ','appl');
AUGraphAddNode(mGraph, &tInstrumentDesc, &mInstrumentNode);
AUGraphOpen(mGraph);
...works just fine. However, if I instead initialize tInstrumentDesc with 'aumu', 'NiMa', '-Ni-' (the description for Native Instruments' Massive Synth), then AUGraphOpen() will return the OSStatus error badComponentType and the AUGraph will fail to open. This holds true for all of my third party Audio Units.
The following code, modified from the Audacity source, sheds a little light on the problem. It loops through all of the available Audio Units of a certain type and prints out their name.
ComponentDescription d;
d.componentType = 'aumu';
d.componentSubType = 0;
d.componentManufacturer = 0;
d.componentFlags = 0;
d.componentFlagsMask = 0;

Component c = FindNextComponent(NULL, &d);
while (c != NULL)
{
    ComponentDescription found;
    Handle nameHandle = NewHandle(0);
    GetComponentInfo(c, &found, nameHandle, 0, 0);
    printf((*nameHandle) + 1);
    printf("\n");
    c = FindNextComponent(c, &d);
}
After running this code, the only output is Apple: DLSMusicDevice (which is the Audio Unit fitting the description 'aumu', 'dls ', 'appl' above).
This doesn't seem to be a problem with the units themselves, as Apple's auval tool lists my third party Units (they validate too).
I've tried running my test application with sudo, and the custom framework I'm working on is in /Library/Frameworks.
Turns out, the issue was due to compiling for 64-bit. After switching to 32-bit, everything began to work as advertised. Not much of a solution, I guess, but there you have it.
To clarify, I mean changing the Xcode build setting ARCHS to "32-bit Intel" as opposed to the default "Standard 32/64-bit Intel".
First of all, I'm going to assume that you initialized mGraph by calling NewAUGraph(&mGraph) instead of just declaring it and then trying to open it. Beyond that, I suspect that the problem here is with your AU graph, not the AudioUnits themselves. But to be sure, you should probably try loading the AudioUnit manually (i.e., outside of a graph) and see if you get any errors that way.
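As a quick check, a sketch of loading the unit directly through the AudioComponent API outside of any graph (assuming OS X 10.6+, using the Massive description from the question) could look like this:
#include <AudioToolbox/AudioToolbox.h>

// Describe the third-party instrument ('aumu' / 'NiMa' / '-Ni-' from the question).
AudioComponentDescription desc = {0};
desc.componentType = 'aumu';
desc.componentSubType = 'NiMa';
desc.componentManufacturer = '-Ni-';

// Look the component up and try to instantiate it outside of an AUGraph.
AudioComponent comp = AudioComponentFindNext(NULL, &desc);
if (comp != NULL)
{
    AudioUnit unit = NULL;
    OSStatus err = AudioComponentInstanceNew(comp, &unit);
    // Inspect 'err'; dispose with AudioComponentInstanceDispose(unit) when done.
}
else
{
    // The component wasn't found at all, which points to a registration or
    // architecture mismatch rather than a graph problem.
}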

How do you retrieve stylus pressure information on windows?

Is anyone aware of a sane way to get tablet/stylus pressure information on Windows?
It's possible to distinguish the stylus from the mouse with ::GetMessageExtraInfo, but you can't get any more information beyond that. I also found the WinTab API in an out-of-the-way corner of the Wacom site, but that's not part of Windows as far as I can tell, and it has a completely distinct event/messaging system from the message queue.
Given that all I want is the most basic pressure information, surely there is a standard Win32/COM API; is anyone aware of what it might be?
The current way to do this is to handle WM_POINTERnnn messages.
Note this is for Windows 8 and later.
Note that you will get these messages for touch AND pen, so you'll need to check the pointerType in order to test for pen. The WPARAM received by a WNDPROC for WM_POINTERnnn messages such as WM_POINTERUPDATE contains the pointer id, which you will need in order to request more info. Empirically, I found that WM_POINTERUPDATE results in info that contains pressure data, whereas if the pointer flags indicate down/up there is no pressure info.
const WORD wid = GET_POINTERID_WPARAM(wParam);
POINTER_INFO piTemp = {NULL};
GetPointerInfo(wid, &piTemp);
if (piTemp.pointerType == PT_PEN)
{
    UINT32 entries = 0;
    UINT32 pointers = 0;
    GetPointerFramePenInfoHistory(wid, &entries, &pointers, NULL); // how many
    // TODO: allocate the space needed for the info, process the data in a loop
    // to retrieve it, and test pointerInfo.pointerFlags for down/up/update.
}
Once you know you are dealing with pen, you can get the pressure info from the POINTER_PEN_INFO struct.
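For instance, a minimal sketch for the non-history case (assuming Windows 8+ and that you already have the pointer id wid from the WPARAM as above):
// Fetch the pen-specific data for this pointer; POINTER_PEN_INFO.pressure
// is reported in the range 0-1024 when the device supports pressure.
POINTER_PEN_INFO ppi = {};
if (GetPointerPenInfo(wid, &ppi))
{
    UINT32 pressure = ppi.pressure;   // 0 if the hardware does not report pressure
    // ... use 'pressure' (and ppi.tiltX / ppi.tiltY / ppi.rotation if needed) ...
}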
This is similar to handling touch although for touch you'd want gesture recognition and inertia. There is a Microsoft sample illustrating using these functions.
It's part of a Build talk:
https://channel9.msdn.com/Events/Build/2013/4-022
You need to use the Tablet PC Pen/Ink API. The COM version of the API lives in InkObj.dll. Here is a starting point for documentation: http://msdn.microsoft.com/en-us/library/ms700664.aspx
If I remember correctly, InkObj.dll is available on Windows XP SP2 and all later Windows client OSes, regardless of whether the machine is a Tablet PC.
UPDATE:
It's been a number of years since I initially provided this answer, but WinTab has become the de facto standard, and N-trig more or less folded, eventually building a wrapper that allows the WinTab API to be accessed via their digitizers.
(http://www.tabletpcbuzz.com/showthread.php?37547-N-trig-Posts-WinTAB-Support-Driver)
This is a pretty late response, but recently my wife and I purchased a Dell XT tablet PC, which, as it turns out, actually uses N-trig, a suite of interfaces that utilize Ink, the accepted new Windows API that shipped with Windows XP Tablet Edition, then SP2 and all versions thereafter.
A lot of Wacom tablets and others use the WinTab API, which is not currently open nor really permitted for general use. From what I hear, the folks who maintain it are pretty sue-happy.
So it depends on what type of tablet you're using and the drivers you have installed for it. In my biased opinion, you should work with Ink, as it provides (or at least, through N-trig and Windows 7, WILL provide) multi-touch capability and will likely be the new standard for tablet interfaces. But as of now, N-trig devices do not translate their pressure and angle information to common WinTab-based applications, such as Photoshop or Corel Painter. Those applications tend to require at least some support for Microsoft's Tablet API in order to function properly.
If you're using the UWP Windows Runtime, it's quite straightforward: the PointerEventArgs event seems to have all the necessary data.
Here is a modified snippet from the Core App (C++/WinRT) template project in Visual Studio 2019:
void OnPointerMoved(IInspectable const &, PointerEventArgs const &args)
{
    if (m_selected)
    {
        float2 const point = args.CurrentPoint().Position();

        m_selected.Offset(
        {
            point.x + m_offset.x,
            point.y + m_offset.y,
            0.0f
        });

        // (new!) Change sprite color based on pen pressure and tilt
        auto sprite = m_selected.as<SpriteVisual>();

        auto const props = args.CurrentPoint().Properties();
        auto const pressure = props.Pressure();
        auto const orientation = props.Orientation() / 360.0f;
        auto const tiltx = (props.XTilt() + 90) / 180.0f;
        auto const tilty = (props.YTilt() + 90) / 180.0f;

        Compositor compositor = m_visuals.Compositor();
        sprite.Brush(compositor.CreateColorBrush({
            (uint8_t)(pressure * 0xFF),
            (uint8_t)(tiltx * 0xFF),
            (uint8_t)(tilty * 0xFF),
            (uint8_t)(orientation * 0xFF)
        }));
    }
}
Similar code will likely work in C#, JavaScript, etc.