How to enable GPU for SkiaSharp GL - xamarin

I'm using this code in the constructor of MainPage (which is derived from ContentPage):
canvasView = new SKGLView();
canvasView.PaintSurface += OnCanvasViewPaintSurface;
canvasView.EnableTouchEvents = true;
canvasView.Touch += OnTouch;
Content = canvasView;
I somehow have the feeling that the GPU is not being used, because with many shapes on the canvas it begins to lag quite soon.
I've tried to enable GPU by using
var GPUContext = GRContext.CreateGl();
var gpuSurface = SKSurface.Create(GPUContext, true, new SKImageInfo(500, 500));
before creating the SKGLView, but GPUContext is null.
What am I doing wrong?
Can someone give an example how to properly enable GPU for SkiaSharp?
My long-term goal is to develop an app for iOS and Android, but for practical reasons I'm currently developing for Windows. I assume the GPU can also be used on mobile devices, right?

Related

How to fix this multisampling error when creating a swapchain?

I'm getting a DXGI ERROR about multisampling when creating a swapchain and need some help after hours of trying to resolve this error.
I'm setting up a simple window for learning Direct3D 11. I have tried changing the SampleDesc.Count and SampleDesc.Quality in the DXGI_SWAP_CHAIN_DESC1 structure, but I still get the error.
// dxgiFactory is using interface IDXGIFactory7
// d3dDevice5 is using interface ID3D11Device5
ComPtr<IDXGISwapChain1> dxgiSwapChain1;
DXGI_SWAP_CHAIN_DESC1 desc;
desc.Width = 800;
desc.Height = 400;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.Stereo = FALSE;
desc.SampleDesc.Count = 0;
desc.SampleDesc.Quality = 0;
desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
desc.BufferCount = 3;
desc.Scaling = DXGI_SCALING_STRETCH;
desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;
desc.AlphaMode = DXGI_ALPHA_MODE_STRAIGHT;
desc.Flags = 0;
hr = dxgiFactory->CreateSwapChainForHwnd(d3dDevice5.Get(), hWnd, &desc, nullptr, nullptr, &dxgiSwapChain1);
Debug output:
DXGI ERROR: IDXGIFactory::CreateSwapChain: Flip model swapchains (DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL and DXGI_SWAP_EFFECT_FLIP_DISCARD) do not support multisampling.
How do I resolve this error?
TL;DR: Either switch from the flip model to the older DXGI_SWAP_EFFECT_DISCARD, or create an MSAA render target that you explicitly resolve.
Create your swap chain with one sample (i.e. no MSAA).
Create a render target texture that uses one or more samples (i.e. MSAA).
Create your render target view for your MSAA render target.
Each frame, render to your MSAA render target, then ResolveSubresource to the swapchain backbuffer (or some other single-sample buffer), then Present. A sketch follows below.
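As a rough sketch of steps 2-4 (not a drop-in implementation), reusing d3dDevice5 and dxgiSwapChain1 from the question and assuming an immediate context named d3dContext plus a 4x sample count the device supports:
// Create an MSAA render target texture (the swapchain itself stays single-sample).
D3D11_TEXTURE2D_DESC msaaDesc = {};
msaaDesc.Width = 800;
msaaDesc.Height = 400;
msaaDesc.MipLevels = 1;
msaaDesc.ArraySize = 1;
msaaDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
msaaDesc.SampleDesc.Count = 4;   // verify with CheckMultisampleQualityLevels first
msaaDesc.SampleDesc.Quality = 0;
msaaDesc.Usage = D3D11_USAGE_DEFAULT;
msaaDesc.BindFlags = D3D11_BIND_RENDER_TARGET;
ComPtr<ID3D11Texture2D> msaaTarget;
hr = d3dDevice5->CreateTexture2D(&msaaDesc, nullptr, &msaaTarget);
// Create the render target view for the MSAA texture.
ComPtr<ID3D11RenderTargetView> msaaRTV;
hr = d3dDevice5->CreateRenderTargetView(msaaTarget.Get(), nullptr, &msaaRTV);
// Each frame: render into the MSAA target, resolve into the backbuffer, present.
d3dContext->OMSetRenderTargets(1, msaaRTV.GetAddressOf(), nullptr);
// ... draw the scene ...
ComPtr<ID3D11Texture2D> backBuffer;
hr = dxgiSwapChain1->GetBuffer(0, IID_PPV_ARGS(&backBuffer));
d3dContext->ResolveSubresource(backBuffer.Get(), 0, msaaTarget.Get(), 0, DXGI_FORMAT_R8G8B8A8_UNORM);
hr = dxgiSwapChain1->Present(1, 0);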
For a detailed code example, see GitHub.
You also can't create it as a DXGI_FORMAT_*_SRGB gamma-correcting swapchain with the new DXGI_SWAP_EFFECT_FLIP_* models. You can create a Render Target View that is DXGI_FORMAT_*_SRGB for a swapchain that is not sRGB to get the same effect. There's a bit of a gotcha when doing both MSAA and sRGB together with the new flip models that is fixed in Windows 10 Fall Creators Update (16299) or later.
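As a minimal sketch of that workaround (assuming the backBuffer obtained from the non-sRGB swapchain above): the swapchain stays DXGI_FORMAT_R8G8B8A8_UNORM, while the view uses the _SRGB variant so writes through it are gamma-corrected.
D3D11_RENDER_TARGET_VIEW_DESC rtvDesc = {};
rtvDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;   // sRGB view on a non-sRGB backbuffer
rtvDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
rtvDesc.Texture2D.MipSlice = 0;
ComPtr<ID3D11RenderTargetView> srgbRTV;
hr = d3dDevice5->CreateRenderTargetView(backBuffer.Get(), &rtvDesc, &srgbRTV);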
If you were using DirectX 12, you don't have the option of using the older swap effects, so you have to implement the MSAA render target directly. Again, see GitHub.
In the 'pre-DirectX 12 / Vulkan' days, DirectX made it easy to enable MSAA by doing a bunch of stuff behind the scenes for you. It would create a non-MSAA render target for display, give you back an MSAA render target for rendering, and do the resolve for you as part of the Present. It was easy, but it was also a bit wasteful.
With the new 'no magic' approach of DirectX 12, you have to do it explicitly in the application. In real games you want to do this anyhow, because you usually do a lot of post-processing and want to do the resolve well before Present, or even use other kinds of resolve (FXAA, MLAA, SMAA). For example:
Render 3D scene to a floating-point MSAA render target
-> Resolve to a single-sample floating-point render target
-> Perform tone-mapping/color-grading/blur/bloom/etc.
-> Render UI/HUD
-> Perform HDR10 signal generation/gamma-correction/color-space warp
-> Present
As you can see from that flow, it's pretty silly to ever have the swapchain be MSAA except in toy examples or sample code.
To get a sense of just how much of a modern game's frame is rendered in multiple passes, see this blog post.
See DirectX Tool Kit for DX11 and DX12.
UPDATE: I covered this in detail in a recent blog post.

THREE.JS - How To Detect Device Performance/Capability & Serve Scene Elements Accordingly

I'd like to be able to implement conditionals within the scene setup to serve different meshes and materials, or lower-poly model imports. That part is simple, but I'm looking for the best or most efficient (best-practice) method of detecting a system's capability for rendering three.js scenes.
For reference: An answer on this question ( How to check client perfomance for webgl(three.js) ) suggests plugins, which as stated check performance after scene creation and not before.
Additionally there is a nice method here ( Using javascript to detect device CPU/GPU performance? ) which involves measuring the speed of the render loop as a means of detecting performance, but is this the best solution we can come up with?
Thanks as always!
Browsers don't afford a lot of information about the hardware they're running on, so it's difficult to determine how capable a device is ahead of time. With the WEBGL_debug_renderer_info extension you can (maybe) get at more details about the graphics hardware being used, but the values returned don't seem consistent and there's no guarantee that the extension will be available. Here's an example of the output:
ANGLE (Intel(R) HD Graphics 4600 Direct3D11 vs_5_0 ps_5_0)
ANGLE (NVIDIA GeForce GTX 770 Direct3D11 vs_5_0 ps_5_0)
Intel(R) HD Graphics 6000
AMD Radeon Pro 460 OpenGL Engine
ANGLE (Intel(R) HD Graphics 4600 Direct3D11 vs_5_0 ps_5_0)
I've created this gist that extracts and roughly parses that information:
function extractValue(reg, str) {
    const matches = str.match(reg);
    return matches && matches[0];
}

// WebGL context setup
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl');
const debugInfo = gl.getExtension('WEBGL_debug_renderer_info');
const vendor = gl.getParameter(debugInfo.UNMASKED_VENDOR_WEBGL);
const renderer = gl.getParameter(debugInfo.UNMASKED_RENDERER_WEBGL);

// Full card description and WebGL layer (if present)
const layer = extractValue(/(ANGLE)/g, renderer);
const card = extractValue(/((NVIDIA|AMD|Intel)[^\d]*[^\s]+)/, renderer);

const tokens = card.split(' ');
tokens.shift();

// Split the card description up into pieces
// with brand, manufacturer, card version
const manufacturer = extractValue(/(NVIDIA|AMD|Intel)/g, card);
const cardVersion = tokens.pop();
const brand = tokens.join(' ');
const integrated = manufacturer === 'Intel';

console.log({
    card,
    manufacturer,
    cardVersion,
    brand,
    integrated,
    vendor,
    renderer
});
Using that information (if it's available), along with other GL context information (like max texture size, available shader precision, etc.) and other device information available through platform.js, you might be able to develop a guess as to how powerful the current platform is.
I was looking into this exact problem not too long ago, but ultimately it seemed difficult to make a good guess with so many different factors at play. Instead I opted to build a package that iteratively modifies the scene to improve performance, which could include loading or swapping out model levels of detail.
Hope that helps at least a little!

Trouble Getting Depth Testing To Work With Apple's Metal Graphics API

I'm spending some time in the evenings trying to learn Apple's Metal graphics API. I've run into a frustrating problem and so must be missing something pretty fundamental: I can only get rendered objects to appear on screen when depth testing is disabled, or when the depth function is changed to "Greater". What could possibly be going wrong? Also, what kinds of things can I check in order to debug this problem?
Here's what I'm doing:
1) I'm using SDL to create my window. When setting up Metal, I manually create a CAMetalLayer and insert it into the layer hierarchy. To be clear, I am not using MTKView and I don't want to use MTKView. Staying away from Objective-C and Cocoa as much as possible seems to be the best strategy for making this application cross-platform. The intention is to write platform-agnostic C++ code with SDL and a rendering engine that can be swapped at run-time. Behind this interface is where all Apple-specific code will live. However, I strongly suspect that part of what's going wrong has something to do with setting up the layer:
SDL_SysWMinfo windowManagerInfo;
SDL_VERSION(&windowManagerInfo.version);
SDL_GetWindowWMInfo(&window, &windowManagerInfo);
// Create a metal layer and add it to the view that SDL created.
NSView *sdlView = windowManagerInfo.info.cocoa.window.contentView;
sdlView.wantsLayer = YES;
CALayer *sdlLayer = sdlView.layer;
CGFloat contentsScale = sdlLayer.contentsScale;
NSSize layerSize = sdlLayer.frame.size;
_metalLayer = [[CAMetalLayer layer] retain];
_metalLayer.contentsScale = contentsScale;
_metalLayer.drawableSize = NSMakeSize(layerSize.width * contentsScale,
layerSize.height * contentsScale);
_metalLayer.device = device;
_metalLayer.pixelFormat = MTLPixelFormatBGRA8Unorm;
_metalLayer.frame = sdlLayer.frame;
_metalLayer.framebufferOnly = true;
[sdlLayer addSublayer:_metalLayer];
2) I create a depth texture to use as a depth buffer. My understanding is that this step is necessary in Metal, whereas in OpenGL the framework creates a depth buffer for me quite automatically:
CGSize drawableSize = _metalLayer.drawableSize;
MTLTextureDescriptor *descriptor =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatDepth32Float_Stencil8 width:drawableSize.width height:drawableSize.height mipmapped:NO];
descriptor.storageMode = MTLStorageModePrivate;
descriptor.usage = MTLTextureUsageRenderTarget;
_depthTexture = [_metalLayer.device newTextureWithDescriptor:descriptor];
_depthTexture.label = @"DepthStencil";
3) I create a depth-stencil state object which will be set at render time:
MTLDepthStencilDescriptor *depthDescriptor = [[MTLDepthStencilDescriptor alloc] init];
depthDescriptor.depthWriteEnabled = YES;
depthDescriptor.depthCompareFunction = MTLCompareFunctionLess;
_depthState = [device newDepthStencilStateWithDescriptor:depthDescriptor];
4) When creating my render pass object, I explicitly attach the depth texture:
_metalRenderPassDesc = [[MTLRenderPassDescriptor renderPassDescriptor] retain];
MTLRenderPassColorAttachmentDescriptor *colorAttachment = _metalRenderPassDesc.colorAttachments[0];
colorAttachment.texture = _drawable.texture;
colorAttachment.clearColor = MTLClearColorMake(0.2, 0.4, 0.5, 1.0);
colorAttachment.storeAction = MTLStoreActionStore;
colorAttachment.loadAction = desc.clear ? MTLLoadActionClear : MTLLoadActionLoad;
MTLRenderPassDepthAttachmentDescriptor *depthAttachment = _metalRenderPassDesc.depthAttachment;
depthAttachment.texture = depthTexture;
depthAttachment.clearDepth = 1.0;
depthAttachment.storeAction = MTLStoreActionDontCare;
depthAttachment.loadAction = desc.clear ? MTLLoadActionClear : MTLLoadActionLoad;
MTLRenderPassStencilAttachmentDescriptor *stencilAttachment = _metalRenderPassDesc.stencilAttachment;
stencilAttachment.texture = depthAttachment.texture;
stencilAttachment.storeAction = MTLStoreActionDontCare;
stencilAttachment.loadAction = desc.clear ? MTLLoadActionClear : MTLLoadActionLoad;
5) Finally, at render time, I set the depth-stencil object before drawing my object:
[_encoder setDepthStencilState:_depthState];
Note that if I go into step 3 and change depthCompareFunction to MTLCompareFunctionAlways or MTLCompareFunctionGreater then I see polygons on the screen, but ordering is (expectedly) incorrect. If I leave depthCompareFunction set to MTLCompareFunctionLess then I see nothing but the background color. It acts AS IF all fragments fail the depth test at all times.
The Metal API validator reports no errors and has no warnings...
I've tried a variety of combinations of settings for things like the depth-stencil texture format and have not made any forward progress. Honestly, I'm not sure what to try next.
EDIT: GPU Frame Capture in Xcode displays a green outline of my polygons, but none of those fragments are actually drawn.
EDIT 2: I've learned that the Metal API validator has an "Extended" mode. When this is enabled, I get these two warnings:
warning: Texture Usage Should not be Flagged as MTLTextureUsageRenderTarget: This texture is not a render target. Clear the MTLTextureUsageRenderTarget bit flag in the texture usage options. Texture = DepthStencil. Texture is used in the Depth attachment.
warning: Resource Storage Mode Should be MTLStorageModePrivate and it Should be Initialized with a Blit: This resource is rarely accessed by the CPU. Changing the storage mode to MTLStorageModePrivate and initializing it with a blit from a shared buffer may improve performance. Texture = 0x102095000.
When I heed these two warnings, I get these two errors. (The warnings and errors seem to contradict one another.)
error 'MTLTextureDescriptor: Depth, Stencil, DepthStencil, and Multisample textures must be allocated with the MTLResourceStorageModePrivate resource option.'
failed assertion `MTLTextureDescriptor: Depth, Stencil, DepthStencil, and Multisample textures must be allocated with the MTLResourceStorageModePrivate resource option.'
EDIT 3: When I run a sample Metal app and use the GPU frame capture tool then I see a gray scale representation of the depth buffer and the rendered object is clearly visible. This doesn't happen for my app. There, the GPU frame capture tool always shows my depth buffer as a plain white image.
Okay, I figured this out. I'm going to post the answer here to help the next guy. There was no problem writing to the depth buffer. This explains why spending time mucking with depth texture and depth-stencil-state settings was getting me nowhere.
The problem is a difference in the coordinate systems used for Normalized Device Coordinates in Metal versus OpenGL. In Metal, NDC span [-1,+1] x [-1,+1] x [0,1]. In OpenGL, NDC span [-1,+1] x [-1,+1] x [-1,+1]. If I simply take the projection matrix produced by glm::perspective and shove it through Metal, the results will not be as expected. To compensate for the NDC difference when rendering with Metal, that projection matrix must be left-multiplied by a matrix that remaps z from [-1,+1] to [0,1], i.e. one that scales z by 0.5 and then offsets it by 0.5 * w.
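As a minimal sketch of that correction with glm (assuming the default right-handed, [-1,+1]-depth glm::perspective; the names here are illustrative):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Remap z from OpenGL's [-1,+1] NDC range to Metal's [0,1]: z' = 0.5 * z + 0.5 * w.
// glm::mat4 is column-major, so m[column][row].
float aspect = 800.0f / 600.0f;               // example aspect ratio
glm::mat4 ndcAdjust(1.0f);
ndcAdjust[2][2] = 0.5f;                       // scale z
ndcAdjust[3][2] = 0.5f;                       // offset z by 0.5 * w
glm::mat4 projGL = glm::perspective(glm::radians(60.0f), aspect, 0.1f, 100.0f);
glm::mat4 projMetal = ndcAdjust * projGL;     // left-multiply, then upload projMetal to the shader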
I found these links to be helpful:
1. http://blog.athenstean.com/post/135771439196/from-opengl-to-metal-the-projection-matrix
2. http://www.songho.ca/opengl/gl_projectionmatrix.html
EDIT: Replaced the explanation with a more complete and accurate one, and replaced the solution with a better one.

Best way to show many mini versions of painted InkCanvas in UWP

I have a problem showing many mini versions of my InkCanvas. In my app it is possible to write or draw on an InkCanvas. Now I want to show all of the created InkCanvas controls in a GridView.
But I cannot create enough of these mini versions for the GridView.
I tested it with 36 mini versions, and after I show one and navigate back, the app crashes every time while rendering the same mini InkCanvas with the error: Insufficient Memory. So I searched and found this post:
Insufficient memory to continue the execution of the program when trying to initialize array of InkCanvas in UWP
I checked the memory usage:
var AppUsageLevel = MemoryManager.AppMemoryUsageLevel;
var AppMemoryLimit = MemoryManager.AppMemoryUsageLimit;
and there is enough free memory. (Is this a bug?)
So I tried to render an image from my grid with an InkCanvas, but the strokes were not rendered and all of the pictures were empty. (Can I save memory this way?)
So now my question is:
Can someone tell me how to solve this problem? And what is the best way?
Thank you very much in advance!
Agredo
If you want to preview your drawings, a better way is to render them to bitmaps and show those bitmaps in the grid instead of multiple instances of a complex control like InkCanvas.
Here is some code from another SO answer that renders ink strokes to a bitmap:
CanvasDevice device = CanvasDevice.GetSharedDevice();
CanvasRenderTarget renderTarget = new CanvasRenderTarget(device, (int)inkCanvas.ActualWidth, (int)inkCanvas.ActualHeight, 96);

using (var ds = renderTarget.CreateDrawingSession())
{
    ds.Clear(Colors.White);
    ds.DrawInk(inkCanvas.InkPresenter.StrokeContainer.GetStrokes());
}

using (var fileStream = await file.OpenAsync(FileAccessMode.ReadWrite))
    await renderTarget.SaveAsync(fileStream, CanvasBitmapFileFormat.Jpeg, 1f);
You also need to add the Win2D.uwp NuGet package to your project.

Unity3d 4.6 UI Image bug on some Windows 8 and 8.1

I'm making an interactive book for Windows users and I'm using the 4.6 UI system. I tested my application on lots of computers running various Windows versions. It works fine on Windows XP, Windows 7, Windows 8 and 8.1, but some of the Windows 8 and 8.1 computers produce a weird bug.
Here is how it should look:
And here is how it looks on Windows 8:
By the way, I have lots of images in my application. I put them into my project with a .bytes extension and create the sprites at runtime. My code to do this is:
void TextAssetToSprite(int pNo)
{
    TextAsset tmp = textAssetArray[pNo] as TextAsset;
    Texture2D imgTexture = new Texture2D(1, 1);
    imgTexture.LoadImage(tmp.bytes);
    Rect rectangle = new Rect(0, 0, imgTexture.width, imgTexture.height);
    Vector2 pivot = new Vector2(0.5f, 0.5f);
    Sprite firstSprite = Sprite.Create(imgTexture, rectangle, pivot);
    imageControl.sprite = firstSprite;
    tmp = null;
    imgTexture = null;
    Resources.UnloadUnusedAssets();
}
I don't know what I'm doing wrong. I've done hours of research but found nothing similar. When I create the sprite in the editor and use it on a UI Image component it works as expected, but that's not an option because there are lots of PNG images in my application and its size would become too large. Please suggest a way to fix this. Any help will be greatly appreciated.
It's been a long time since I posted this, and I don't know whether Unity has fixed the issue or not. The problem was that when creating the Texture2D I was only giving it width and height parameters, but some operating systems change the default settings for the Texture2D. So here is the solution. Change
Texture2D imgTexture = new Texture2D(1, 1);
to
Texture2D imgTexture = new Texture2D(2, 2, TextureFormat.RGB24, false);
