Why is my GPU load low when rendering a scene with Blender?

I have a scene in Blender and I render it with Cycles set to GPU Compute.
The problem is that while the scene is rendering, my GPU load stays under 5% and the CPU is doing all the render work.
My GPU is an Nvidia GTX 745 and my CPU an Intel Core i5. The Blender version I am using to render is 2.82a.
Does anybody know why this is happening?

You could go into the Blender preferences (Edit > Preferences > System > Cycles Render Devices) and look for your GPU there, as you may have 'None' selected.
It might also be listed under a different compute backend tab (CUDA, OptiX, or OpenCL).
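If you prefer to check or set this from Blender's Python console instead of the preferences dialog, here is a minimal sketch using the 2.8x Python API. It assumes the CUDA backend (which the GTX 745 supports); adjust the backend string if your hardware uses OptiX or OpenCL.

import bpy

# Cycles stores its compute device settings in the add-on preferences
prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'  # assumption: CUDA backend; could be 'OPTIX' or 'OPENCL'

# Refresh the device list and enable every detected device (GPU and CPU entries alike)
prefs.get_devices()
for device in prefs.devices:
    device.use = True

# Tell the scene to actually render on the GPU
bpy.context.scene.cycles.device = 'GPU'

If the GPU still does not show up under any backend, the usual culprit is a missing or outdated Nvidia driver.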

Related

Qt Enterprise for IMX6 not using Hardware Acceleration?

We built an application which uses Qt WebEngine to test WebGL functionality. It worked, but CPU utilization was very high (>30%) for rendering some sine waveforms. The root file system was provided by Qt Enterprise for IMX6, as described here:
http://doc.qt.digia.com/QtEnterpriseEmbedded/qtee-preparing-hardware-imx6sabresd.html
On inspecting the root file system we found that there were no GPU drivers (usually libVivante.so and libVivante.ko for IMX6), so it looks like all the GL rendering is being done by the CPU instead of the GPU, and that is the reason for the high CPU utilization. Does anybody know any other ways to enable hardware acceleration for WebGL in Qt WebEngine?
Qt WebEngine requires hardware acceleration to composite the layers of the page and you would probably not be able to see anything on the screen without it.
Chromium, behind Qt WebEngine, is quite a beast and is more designed for perceived smoothness than to yield CPU cycles; it will use all the resources it can to achieve this.
Any JavaScript WebGL call will go from the main render thread, then to the GPU process main thread to be decoded into GL calls to the driver. Each different WebGL canvas will trigger a different FBO to be used and bound, requiring GL context switching, and as often as possible, the latest state will trigger the Chromium compositor to kick in, send all the delegated scene to the browser process, to finally end in QtQuick's scene graph thread to be composited.
All this to say that a single JavaScript WebGL call triggers a much bigger machine than just telling OpenGL to draw those geometries. A CPU usage of 30% on this kind of device doesn't seem abnormal to me, though there might be ways to avoid bottlenecks.
The most efficient this could get would be a custom QtQuick scene graph geometry, as shown in this example: http://qt-project.org/doc/qt-5/qtquick-scenegraph-customgeometry-example.html, but even then I wouldn't expect CPU usage under 10% on that device.

Explanation on MAJOR performance difference between LWJGL and other OpenGL implementations

Why would LWJGL be so much slower than say the Unity implementation of OpenGL or even Ogre3D? I'll begin with some "benchmarks" (if you would even call them that) on what I've tested.
Hardware:
i5-3570K @ 4.3 GHz
GTX 780 @ 1150 MHz
First Test: Place 350,000 triangles on screen (modified Stanford Dragon)
Results:
GTX 780 Renders at 37 FPS (USING LWJGL)
GTX 780 Renders at ~300 FPS (USING UNITY3D)
GTX 780 Renders at ~280 FPS (USING OGRE3D)
Second Test: Render Crytek Sponza w/ Textures (I believe around 200,000 vertices?)
Results:
GTX 780 Renders at 2 FPS (USING LWJGL)
GTX 780 Renders at ~150 FPS (USING UNITY3D)
GTX 780 Renders at ~130 FPS (USING OGRE3D)
Normally I use Ogre3D, Unity3D, or Panda3D to render my game projects, but the difference in frame rates is staggering. I know Unity has things like occlusion culling, so it's generally the quickest, but even when using similar calls in Ogre3D and LWJGL I would expect similar results... Ogre3D and LWJGL are both doing front-face-only culling, but LWJGL doesn't get any performance increase versus rendering everything. One last thing: LWJGL tends to exceed 2.5 GB of RAM usage rendering Sponza, but that doesn't explain the other results.
In case anyone is having the same issue: the problem is NOT Java, I've realized. Recording immediate-mode draw calls into display lists is deprecated and yields poor performance. You MUST use VBOs, not display lists. In my case, performance increased by up to 600x on my laptop.
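To make that concrete, here is the same technique sketched in PyOpenGL rather than LWJGL (the underlying GL calls map one-to-one to LWJGL's GL11/GL15 bindings): upload the vertex data into a VBO once, then draw from it every frame. It assumes a current compatibility-profile GL context, and the single triangle is just a placeholder for real mesh data.

import numpy as np
from OpenGL.GL import (
    GL_ARRAY_BUFFER, GL_STATIC_DRAW, GL_FLOAT, GL_TRIANGLES, GL_VERTEX_ARRAY,
    glGenBuffers, glBindBuffer, glBufferData,
    glEnableClientState, glVertexPointer, glDrawArrays,
)

# One-time setup: upload the geometry into GPU memory.
vertices = np.array([
    -0.5, -0.5, 0.0,
     0.5, -0.5, 0.0,
     0.0,  0.5, 0.0,
], dtype=np.float32)  # placeholder triangle; the real mesh data goes here
vbo = glGenBuffers(1)
glBindBuffer(GL_ARRAY_BUFFER, vbo)
glBufferData(GL_ARRAY_BUFFER, vertices.nbytes, vertices, GL_STATIC_DRAW)

# Per frame: point the fixed-function vertex array at the buffer and draw.
def draw_frame():
    glBindBuffer(GL_ARRAY_BUFFER, vbo)
    glEnableClientState(GL_VERTEX_ARRAY)
    glVertexPointer(3, GL_FLOAT, 0, None)
    glDrawArrays(GL_TRIANGLES, 0, len(vertices) // 3)

The vertex data crosses the bus once at load time; after that, each frame only issues a couple of state changes and a single draw call instead of re-sending or replaying every vertex.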

Is there a significant difference between the regular canvas and OpenGL in terms of framerate?

I've been playing around with graphics in Android and I noticed that it takes a lot of time and resources to draw bitmaps with the canvas. Especially in high-end games which require many images to be drawn at once, this could be pretty bad for things such as the framerate. If I decide to learn and use OpenGL, would it make a big difference? Or maybe I'm not using the canvas right?
It depends on what version of Android you're talking about.
In Android 2.x, canvas operations are not hardware accelerated, so the GPU is not used at all and everything is processed pixel by pixel on the CPU.
Hardware acceleration for the canvas was added in Android 3.0, so on 3.0 and later you can have a GPU-accelerated canvas.
OpenGL ES always uses hardware acceleration, so on Android 2.x it will always be much, much faster than a canvas (and it is your only real option for any kind of game that needs a reasonable framerate).
On hardware-accelerated Android versions you probably won't notice much of a difference between canvas and OpenGL, because they both leverage the GPU, provided that your canvas has hardware acceleration enabled.

Do Graphics Cards boost speed of other rendering when we don't invoke DirectX or OpenGL?

I am curious about how graphics cards work in general. Please enlighten me.
If we don't make calls to a graphics library such as DirectX or OpenGL, does the graphics card still render everything else on screen, or do all those rendering calculations depend on, and get done by, the CPU?
For instance, if I create a simple program that loads an image and renders it in a window, without using DirectX or OpenGL, would a faster graphics card render the image faster? Or does this depend solely on the CPU if we don't use DirectX or OpenGL?
The simple answer is "yes": in a modern OS the graphics card renders almost everything on the screen. This isn't really a 'graphics card' question so much as an OS question. Cards have been able to do this since the 3dfx days, but operating systems did not use them for things like window compositing until relatively recently.
For your example, the answer really depends on the API you use to render your window. One could imagine an API that is far removed from the OS and chooses to always keep the image data in CPU memory; if every frame is displayed by blitting the visible portion from CPU to GPU, the GPU would likely not be the bottleneck (PCIe probably would be). But other APIs (hopefully the one you use) can store the image data in GPU memory, so the visible portion can be displayed from GPU memory without traversing PCIe every frame. That said, the 'decoration' part of the window is likely drawn by a series of OpenGL or DX calls.
Hopefully that answers things well enough?

Matlab GPU acceleration for loading large point cloud dataset

I'm trying to load a large dataset of a million points in 3D space in MATLAB, but whenever I try to plot it (scatter or plot3) it takes forever. This is on a laptop with an Intel Graphics Media Accelerator 950 with up to 224 MB of shared system memory. It also sometimes causes MATLAB 2008a to crash. Is there a way to make MATLAB use an Nvidia GPU for plotting this dataset? I have another laptop with an Nvidia Go 6150. I'm on Windows XP and Windows 7.
OpenGL
You can set the renderer used for figures in MATLAB:
http://www.mathworks.com/support/tech-notes/1200/1201.html
To take advantage of the GPU, you can set it to OpenGL:
set(0,'DefaultFigureRenderer','opengl')
which, per the documentation, "enables MATLAB to access graphics hardware if it is available on your machine. It provides object transparency, lighting, and accelerated performance."
Other ways
The following link shows some ideas about optimizing graphics performance:
http://www.mathworks.com/access/helpdesk/help/techdoc/creating_plots/f7-60415.html
However, these techniques apply to cases where you are creating many graphs of similar data, and they improve rendering speed by preventing MATLAB from performing unnecessary operations.
If you want to use CUDA, the minimum required card is a G80; your Go 6150 is sadly too old.
List of compatible cards.
There is also Jacket, a commercial product that gives GPU power to MATLAB:
http://www.accelereyes.com/products/jacket
You can download a trial version (30 days, as I remember).
