WebGL: performance cost of switching shaders and textures

I. For switching between shader effects, which approach is better?
1. Using one big shader program and selecting the effect with a uniform and an if/else clause in the shader.
2. Switching programs between draw calls.
II. Is it better to use one big texture or several small textures? Does uploading a texture cost much, and how about binding a texture?

Well, it would probably be best to write some perf tests and try it, but in general:
Small shaders are faster than big ones.
One texture is faster than many textures.
Uploading textures is slow.
Binding textures is fast.
Switching programs is slow, but usually much faster than combining two small programs into one big program.
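To make option 1 from the question concrete, here is a rough sketch (all names are placeholders, not from the question) of a combined "big" fragment shader that picks the effect with a uniform branch; option 2 would simply be two small programs with a gl.useProgram() call between draws:

// Option 1: one big shader, effect selected per draw via a uniform.
// Every fragment pays for the branch, even when only one effect is used.
const bigFragmentShader = `
  precision mediump float;
  uniform int u_effect;         // 0 = plain, 1 = grayscale
  uniform sampler2D u_texture;
  varying vec2 v_texcoord;
  void main() {
    vec4 color = texture2D(u_texture, v_texcoord);
    if (u_effect == 0) {
      gl_FragColor = color;
    } else {
      float g = dot(color.rgb, vec3(0.299, 0.587, 0.114));
      gl_FragColor = vec4(g, g, g, color.a);
    }
  }
`;
// Option 2: two small programs (one per effect), switched with
// gl.useProgram(...) between draw calls; no per-pixel branch at all.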
Fragment shaders in particular get executed millions of times a frame. A 1920x1080 display has 2 million pixels, so even if there were no overdraw your shader would still get executed 2 million times per frame. For anything executed 2 million times a frame, or 120 million times a second if you're targeting 60 frames per second, smaller is going to be better.
As for textures, mips are faster than no mips because the GPU has a cache for textures, and if the pixels it needs next are near the ones it previously read, they'll likely already be in the cache. If they are far away they won't be in the cache. That also means randomly reading from a texture is particularly slow. But most apps read fairly linearly through a texture.
Switching programs is slow enough that sorting models by which program they use, so that you draw all models that use program A first and then all models that use program B, is generally faster than drawing them in a random order. But there are other things that affect performance too. For example, if a large model is obscuring a small model, it's better to draw the large model first, since the small model will then fail the depth test (z-buffer) and will not have its fragment shader executed for any pixels. So it's a trade-off. All you can really do is test your particular application.
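As a rough sketch of the sort-by-program idea, assuming hypothetical names like Model and drawModel (this is just an illustration, not code from the question):

// Group draw calls by program so each program is bound only once per frame.
interface Model {
  program: WebGLProgram;
  // buffers, textures, uniforms ...
}

function drawSorted(gl: WebGLRenderingContext, models: Model[],
                    drawModel: (m: Model) => void): void {
  const byProgram = new Map<WebGLProgram, Model[]>();
  for (const m of models) {
    const group = byProgram.get(m.program) ?? [];
    group.push(m);
    byProgram.set(m.program, group);
  }
  for (const [program, group] of byProgram) {
    gl.useProgram(program);          // one program switch per group
    for (const m of group) drawModel(m);
  }
}

Whether this actually wins over a depth-friendly draw order is exactly the trade-off described above, so measure both.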
Also, it's important to test in the correct way.
http://updates.html5rocks.com/2012/07/How-to-measure-browser-graphics-performance

Related

Rendering Pipeline -- Performance -- Scaling with regard to amount of pixels

I had a discussion with a friend about two questions regarding the performance of the OpenGL Rendering Pipeline, and we would like to ask for help in determining who is right.
I argued that the throughput scales linearly with the amount of pixels involved, and therefore rendering a 4k scene should take 4 times as long as rendering a 1080p scene. Then we discovered this resolution-fps comparison video [see 1], and the scaling does not seem to be linear. Could someone explain why this is the case?
I argued that rendering a 1080p scene and rendering every 1/4 pixel in a 4k scene should have the same performance, as in both cases the same number of pixels is drawn [see 2]. My friend argued that this is not the case, as adjacent pixel calculations can be done with a single instruction. Is he right? And if so, could someone explain how this works in practice?
[1] Video
[2] Illustration
I argued that the throughput scales linearly with the amount of pixels involved, and therefore rendering a 4k scene should take 4 times as long as rendering a 1080p scene. Then we discovered this resolution-fps comparison video [see 1], and the scaling does not seem to be linear. Could someone explain why this is the case?
Remember: rendering happens in a pipeline. And rendering can only happen at the speed of the slowest part of that pipeline. Which part that is depends entirely on what you're rendering.
If you're shoving 2M triangles per frame at a GPU, and the GPU can only render 60M triangles per second, the highest framerate you will ever see is 30FPS. Your performance is bottlenecked on the vertex processing pipeline; the resolution you render to is irrelevant to the number of triangles in the scene.
Similarly, if you're rendering 5 triangles per frame, it doesn't matter what your resolution is; your GPU can chew that up in micro-seconds, and will be sitting around waiting for more. Your performance is bottlenecked on how much you're sending.
Resolution only scales linearly with performance if you're bottlenecked on the parts of the rendering pipeline that actually depend on resolution: rasterization, fragment processing, blending, etc. If those aren't your bottleneck, there's no guarantee that your performance will be affected by increasing the resolution.
And it should be noted that modern high-performance GPUs require being forced to render a lot of stuff before they'll be bottlenecked on the fragment pipeline.
I argued that rendering a 1080p scene and rendering every 1/4 pixel in a 4k scene should have the same performance, as in both cases the same number of pixels is drawn [see 2]. My friend argued that this is not the case, as adjacent pixel calculations can be done with a single instruction. Is he right?
That depends entirely on how you manage to cause the system to "render every 1/4 pixel in a 4k scene". Rasterizers generally don't go around skipping pixels. So how do you intend to make the GPU pull off this feat? With a stencil buffer?
Personally, I can't imagine a way to pull this off without breaking SIMD, but I won't say it's impossible.
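For completeness, if you did try the stencil route, the test itself is cheap to set up; the awkward part is writing the 1-in-4 pattern into the stencil buffer in the first place. A minimal WebGL-flavoured sketch, assuming that pattern already exists:

declare const gl: WebGLRenderingContext;  // context created with { stencil: true }

// Only shade pixels whose stencil value is 1 (the pre-drawn 1-in-4 mask).
gl.enable(gl.STENCIL_TEST);
gl.stencilFunc(gl.EQUAL, 1, 0xff);        // pass only where stencil == 1
gl.stencilOp(gl.KEEP, gl.KEEP, gl.KEEP);  // leave the mask untouched while drawing
// ...draw the scene...
// Note: rasterizers still work on 2x2 pixel quads internally, so masking
// 3 of every 4 pixels wastes most of the SIMD lanes -- which is the concern above.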
And if so, could someone explain how this works in practice?
You're talking about the very essence of Single-Instruction, Multiple Data (SIMD).
When you render a triangle, you execute a fragment shader on every fragment generated by the rasterizer. But you're executing the same fragment shader program on each of them. Each FS that operates on a fragment uses the same source code. They have the same "Single Instruction".
The only difference between them is really the data they start with. Each fragment contains the interpolated per-vertex values provided by vertex processing. So they have "Multiple" sets of "Data".
So if they're all going to be executing the same instructions over different initial values... why bother executing them separately? Just execute them using SIMD techniques. Each opcode is executed on different sets of data. So you only have one hardware "execution unit", but that unit can process 4 (or more) fragments at once.
This execution model is basically why GPUs work.
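A toy illustration of that execution model (not real GPU code, just the idea of one instruction stream operating over several fragments' data):

// One "instruction" (scale the color) applied to a group of four fragments.
// The operation is identical for every lane; only the data differs.
type FragmentData = { r: number; g: number; b: number };

function scaleGroup(lanes: [FragmentData, FragmentData, FragmentData, FragmentData],
                    factor: number): void {
  // Conceptually these four updates happen in lockstep on one execution unit.
  for (const lane of lanes) {
    lane.r *= factor;
    lane.g *= factor;
    lane.b *= factor;
  }
}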

Three.js: What's the upper limit for holding 60 FPS on an average desktop?

I'm currently working on a game using Three.js. I've been studying software engineering for four years and have been working professionally on backends for two, but I've barely touched on graphics aside from some simple Unity experimenting.
I currently have ~22,000 vertices and ~8,000 faces according to renderstats.js, and my desktop (above average) can't run it above 20 FPS. I'm using Lambert material as well as a single ambient light, so I feel like this isn't too much to ask.
With these figures in mind, is this the expected behavior for three.js rendering?
I'm pretty sure that is not the end of the line, and you are probably missing some possibilities for massive performance improvements.
But just to give you some numbers first,
if you leave out everything fancy (including three.js) and just render an ultra-simple point cloud with one fragment rendered per point, you can easily get to rendering 10-20 million (yes, million) points/vertices on an average GPU.
just with simple shapes and materials, I already got three.js to render something in the range of 500k triangles (at 1080p resolution) at 60FPS without problems. You can probably multiply those numbers by 10 for the latest high-end GPUs.
However, these kinds of numbers are not really helpful.
Some hints:
if you want to debug your rendering-performance, you should first add some metrics. Renderstats is good, but I'd recommend integrating http://spite.github.io/rstats/ for this (see the example).
generally the choice of material shouldn't matter too much; the GPU is way more capable than most people think, and it's more likely a problem somewhere else in the pipeline. EDIT from comment: in some cases, like high-resolution displays with slow GPUs (think mobile devices), this might be less true and complicated shader code can slow down your site, but it's worth looking at the other points first. As the rendering itself happens off-thread (so you can't measure its duration using regular tools like the devtools profiler), you can use the EXT_disjoint_timer_query extension to get some information about what is going on on the GPU.
the number of drawcalls shouldn't be too high: three.js needs to do a single drawcall for every Mesh and Points object rendered in the scene, and too many objects are generally a far bigger problem than objects with lots of vertices. The number of drawcalls can be reduced by merging multiple geometries into one and making use of multi-materials, vertex colors and things like that (see the sketch after this list).
if you are doing postprocessing, the GPU needs to render every pixel on screen several times. This can also massively limit your performance. It can be optimized by merging multiple postprocessing passes into one (I admit, that'd be a lot of hard work...)
another problem could be on the JS side: you should use the profiler or timeline view from the chrome devtools to see if maybe it's the JavaScript that is taking too much time per frame (it shouldn't be more than 8-12ms per frame). I've been told there are ways to optimize JavaScript performance as well :)
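As a sketch of the drawcall point above: merging many static geometries into one BufferGeometry means one mesh and therefore one drawcall. The helper name and import path depend on your three.js version (older releases call it mergeBufferGeometries), so treat this as an outline rather than copy-paste code:

import * as THREE from 'three';
// Lives in the examples folder; the exact name/path varies by three.js version.
import { mergeGeometries } from 'three/examples/jsm/utils/BufferGeometryUtils.js';

// Build many boxes but draw them as a single mesh (one drawcall).
function buildMergedBoxes(count: number): THREE.Mesh {
  const geometries: THREE.BufferGeometry[] = [];
  for (let i = 0; i < count; i++) {
    const box = new THREE.BoxGeometry(1, 1, 1);
    box.translate(Math.random() * 100, Math.random() * 100, Math.random() * 100);
    geometries.push(box);
  }
  const merged = mergeGeometries(geometries);
  return new THREE.Mesh(merged, new THREE.MeshLambertMaterial({ color: 0x8888ff }));
}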

Is it better to use a single texture or multiple textures for a YUV image

This question is for OpenGL ES 2.0 (on Android) but may be more general to OpenGL.
Ultimately all performance questions are implementation-dependent, but if anyone can answer this question in general or based on their experience that would be helpful. I'm writing some test code as well.
I have a YUV (12bpp) image I'm loading into a texture and color-converting in my fragment shader. Everything works fine but I'd like to see where I can improve performance (in terms of frames per second).
Currently I'm actually loading three textures for each image - one for the Y component (of type GL_LUMINANCE), one for the U component (of type GL_LUMINANCE and of course 1/4 the size of the Y component), and one for the V component (of type GL_LUMINANCE and of course 1/4 the size of the Y component).
Assuming I can get the YUV pixels in any arrangement (e.g. the U and V in separate planes or interspersed), would it be better to consolidate the three textures into only two or only one? Obviously it's the same number of bytes to push to the GPU no matter how you do it, but maybe with fewer textures there would be less overhead. At the very least, it would use fewer texture units. My ideas:
If the U and V pixels were interspersed with each other, I could load them in a single texture of type GL_LUMINANCE_ALPHA which has two components.
I could load the entire YUV image as a single texture (of type GL_LUMINANCE but 3/2 the size of the image) and then in the fragment shader I could call texture2D() three times on the same texture, doing a bit of arithmetic to figure out the correct coordinates to pass to texture2D for the Y, U and V components.
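For what it's worth, idea 1 (Y in a GL_LUMINANCE texture, interleaved U/V in a quarter-size GL_LUMINANCE_ALPHA texture) needs only two fetches in the shader. A rough sketch of the fragment shader, written here in WebGL/GLES 2.0 style; the sampler names are placeholders and the constants are the usual BT.601 ones, so adjust for your actual color space:

// Two textures instead of three: Y from a LUMINANCE texture,
// U from the .r channel and V from the .a channel of a LUMINANCE_ALPHA texture.
const yuvToRgbFragmentShader = `
  precision mediump float;
  uniform sampler2D u_texY;    // full size, GL_LUMINANCE
  uniform sampler2D u_texUV;   // half size in each dimension, GL_LUMINANCE_ALPHA
  varying vec2 v_texcoord;
  void main() {
    float y = texture2D(u_texY, v_texcoord).r;
    vec2 uv = texture2D(u_texUV, v_texcoord).ra - vec2(0.5);  // U in r, V in a
    gl_FragColor = vec4(
      y + 1.402 * uv.y,
      y - 0.344 * uv.x - 0.714 * uv.y,
      y + 1.772 * uv.x,
      1.0);
  }
`;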
I would combine the data into as few textures as possible. Fewer textures is usually a better option for a few reasons.
Fewer state changes to set up the draw call.
The fewer texture fetches in a fragment shader the better.
Less upload time.
Sources:
I understand some of these are focused on more specific hardware, but the principles apply to most mobile graphics architectures.
Best Practices for Working with Texture Data
Optimize OpenGL for Tegra
Optimizing performance of a heavy fragment shader
"Binding to a texture takes time for OpenGL ES to process. Apps that reduce the number of changes they make to OpenGL ES state perform better."
"In my experience mobile GPU performance is roughly proportional to the number of texture2D calls." "There are two texture loads, so the minimum cycle count for the texture sub-unit is two." (Tegra has a texture unit which has to run a cycle for each texture read.)
"making calls to the glTexSubImage and glCopyTexSubImage functions particularly expensive" - upload operations must stall the pipeline until textures are uploaded. It is faster to batch these into a single upload than to block a bunch of separate times.

DirectX9 - Efficiently Drawing Sprites

I'm trying to create a platformer game, and I am taking various sprite blocks, and piecing them together in order to draw the level. This requires drawing a large number of sprites on the screen every single frame. A good computer has no problem handling drawing all the sprites, but it starts to impact performance on older computers. Since this is NOT a big game, I want it to be able to run on almost any computer. Right now, I am using the following DirectX function to draw my sprites:
D3DXVECTOR3 center(0.0f, 0.0f, 0.0f);
D3DXVECTOR3 position(static_cast<float>(x), static_cast<float>(y), z);
(my LPD3DXSPRITE object)->Draw((sprite texture pointer), NULL, &center, &position, D3DCOLOR_ARGB(a, r, g, b));
Is there a more efficient way to draw these pictures on the screen? Is there a way that I can use less complex picture files (I'm using regular PNGs right now) to speed things up?
To sum it up: what is the most performance-friendly way to draw sprites in DirectX? Thanks!
The ID3DXSPRITE interface you are using is already pretty efficient. Make sure all your sprite draw calls happen in one batch if possible between the sprite begin and end calls. This allows the sprite interface to arrange the draws in the most efficient way.
For extra performance you can load multiple smaller textures into one larger texture and use texture coordinates to pick them out. This means textures don't have to be swapped as frequently. See:
http://nexe.gamedev.net/directknowledge/default.asp?p=ID3DXSprite
The file type you are using for the textures does not matter as long as they are preloaded into textures. Make sure you load them all into textures once when the game/level is loading. Once you have loaded them into textures it does not matter what format they were originally in.
If you still are not getting the performance you want, try using PIX to profile your application and find where the bottlenecks really are.
Edit:
This is too long to fit in a comment, so I will edit this post.
When I say swapping textures I mean binding them to a texture stage with SetTexture. Each time SetTexture is called there is a small performance hit as it changes the state of the texture stage. Normally this delay is fairly small, but can be bad if DirectX has to pull the texture from system memory to video memory.
ID3DXSprite will reorder the draws that are between begin and end calls for you. This means SetTexture will typically only be called once for each texture, regardless of the order you draw them in.
It is often worth loading small textures into a large one. For example, if it were possible to fit all the small textures into one large one, then the texture stage could just stay bound to that texture for all draws. Normally this will give a noticeable improvement, but testing is the only way to know for sure how much it will help. It would look terrible, but you could just throw in any large texture and pretend it is the combined one to test what performance difference there would be.
I agree with dschaeffer, but would like to add that if you are using a large number of different textures, it may be better to pack them together into a single (or a few) larger textures and adjust the texture coordinates of the different sprites accordingly. Texture state changes cost a lot, and this may speed things up on older systems.
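The coordinate adjustment itself is just a rescale of the sprite's pixel rectangle into the atlas; a small illustration (TypeScript only for brevity, the math is identical in DirectX), with all names made up:

// Given a sprite's pixel rectangle inside a larger atlas texture,
// compute the texture coordinates to use instead of binding a separate texture.
interface PixelRect { x: number; y: number; width: number; height: number; }
interface UVRect { u0: number; v0: number; u1: number; v1: number; }

function atlasUVs(sprite: PixelRect, atlasWidth: number, atlasHeight: number): UVRect {
  return {
    u0: sprite.x / atlasWidth,
    v0: sprite.y / atlasHeight,
    u1: (sprite.x + sprite.width) / atlasWidth,
    v1: (sprite.y + sprite.height) / atlasHeight,
  };
}
// With every sprite packed into one atlas, the texture stays bound for the
// whole batch and only the quad's texture coordinates change per sprite.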

graphics: best performance with floating point accumulation images

I need to speed up some particle system eye candy I'm working on. The eye candy involves additive blending, accumulation, and trails and glow on the particles. At the moment I'm rendering by hand into a floating point image buffer, converting to unsigned chars at the last minute then uploading to an OpenGL texture. To simulate glow I'm rendering the same texture multiple times at different resolutions and different offsets. This is proving to be too slow, so I'm looking at changing something. The problem is, my dev hardware is an Intel GMA950, but the target machine has an Nvidia GeForce 8800, so it is difficult to profile OpenGL stuff at this stage.
I did some very unscientific profiling and found that most of the slow down is coming from dealing with the float image: scaling all the pixels by a constant to fade them out, and converting the float image to unsigned chars and uploading to the graphics hardware. So, I'm looking at the following options for optimization:
Replace floats with uint32's in a fixed point 16.16 configuration
Optimize float operations using SSE2 assembly (image buffer is a 1024*768*3 array of floats)
Use OpenGL Accumulation Buffer instead of float array
Use OpenGL floating-point FBO's instead of float array
Use OpenGL pixel/vertex shaders
Have you any experience with any of these possibilities? Any thoughts, advice? Something else I haven't thought of?
The problem is simply the sheer amount of data you have to process.
Your float buffer is 9 megabytes in size, and you touch the data more than once. Most likely your rendering loop looks somewhat like this:
Clear the buffer
Render something on it (uses reads and writes)
Convert to unsigned bytes
Upload to OpenGL
That's a lot of data that you move around, and the cache can't help you much because the image is much larger than your cache. Let's assume you touch every pixel five times. If so, you move 45MB of data in and out of the slow main memory. 45MB does not sound like much data, but consider that almost every memory access will be a cache miss. The CPU will spend most of the time waiting for the data to arrive.
If you want to stay on the CPU to do the rendering there's not much you can do. Some ideas:
Using SSE for non-temporal loads and stores may help, but it will complicate your task quite a bit (you have to align your reads and writes).
Try breaking up your rendering into tiles, e.g. do everything on smaller rectangles (256*256 or so); there is a sketch of this after the list. The idea behind this is that you actually get a benefit from the cache. After you've cleared your rectangle, for example, the entire tile will be in the cache. Rendering and converting to bytes will be a lot faster now because there is no need to get the data from the relatively slow main memory anymore.
Last resort: Reduce the resolution of your particle effect. This will give you a good bang for the buck at the cost of visual quality.
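A rough sketch of the tiling idea mentioned above, assuming float pixels in the 0..1 range and two hypothetical passes, "fade" and "convert to bytes" (plain TypeScript, all names made up):

// Run every pass over one 256x256 tile before moving on, so the second pass
// finds the tile's data still in cache instead of re-reading main memory.
const WIDTH = 1024, HEIGHT = 768, CHANNELS = 3, TILE = 256;

function fadeAndConvertTiled(src: Float32Array, dst: Uint8Array, fade: number): void {
  for (let ty = 0; ty < HEIGHT; ty += TILE) {
    for (let tx = 0; tx < WIDTH; tx += TILE) {
      const xEnd = Math.min(tx + TILE, WIDTH);
      const yEnd = Math.min(ty + TILE, HEIGHT);
      // Pass 1 over this tile only: fade the float pixels.
      for (let y = ty; y < yEnd; y++)
        for (let i = (y * WIDTH + tx) * CHANNELS; i < (y * WIDTH + xEnd) * CHANNELS; i++)
          src[i] *= fade;
      // Pass 2 over the same tile: convert to unsigned bytes while it is still hot.
      for (let y = ty; y < yEnd; y++)
        for (let i = (y * WIDTH + tx) * CHANNELS; i < (y * WIDTH + xEnd) * CHANNELS; i++)
          dst[i] = Math.max(0, Math.min(255, Math.round(src[i] * 255)));
    }
  }
}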
The best solution is to move the rendering onto the graphic card. Render to texture functionality is standard these days. It's a bit tricky to get it working with OpenGL because you have to decide which extension to use, but once you have it working the performance is not an issue anymore.
Btw - do you really need floating-point render targets? If you can get away with 3 bytes per pixel you will see a nice performance improvement.
It's best to move the rendering calculation for massive particle systems like this over to the GPU, which has hardware optimized to do exactly this job as fast as possible.
Aaron is right: represent each individual particle with a sprite. You can calculate the movement of the sprites in space (e.g. accumulate their positions per frame) on the CPU using SSE2, but do all the additive blending and accumulation on the GPU via OpenGL. (Drawing sprites additively is easy enough.) You can handle your trails and blur either by doing it in shaders (the "pro" way), by rendering to an accumulation buffer and back, or by simply generating a bunch of additional sprites on the CPU to represent the trail and throwing them at the rasterizer.
Try to replace the manual code with sprites: An OpenGL texture with an alpha of, say, 10%. Then draw lots of them on the screen (ten of them in the same place to get the full glow).
If by "manual" you mean that you are using the CPU to poke pixels, I think pretty much anything you can do where you draw textured polygons using OpenGL instead will represent a huge speedup.
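On the GPU side, the additive blending mentioned above is just a blend-state change before drawing the textured quads; a WebGL-flavoured sketch (the desktop GL calls are analogous):

declare const gl: WebGLRenderingContext;

// Additive blending: overlapping low-alpha sprites accumulate brightness,
// which is what produces the glow/trail look without any CPU pixel work.
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE);   // dst += src * srcAlpha
gl.depthMask(false);                  // particles usually shouldn't write depth
// ...draw all particle sprites (e.g. ten ~10%-alpha copies for the full glow)...
gl.depthMask(true);
gl.disable(gl.BLEND);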
