Improve performance by reusing the alpha channel of an RGB texture? - opengl-es

I have a 48-bit RGB16F texture.
https://www.khronos.org/registry/OpenGL-Refpages/es3.0/html/glTexImage2D.xhtml
states that when using RGB, 1.0 will be put into the alpha channel.
Is 1.0 implicit or actually stored?
And in the latter case, my main question:
If I put my 16-bit heightmap into the alpha channel, so the texture becomes RGBA16F,
will I improve performance?
All insights are welcome.

Is 1.0 implicit or actually stored?
That's implementation specific. If you were asking about 888 vs 8888 textures, I'd tell you that pretty much every implementation is bound to use 32 bits per texel, but I'm not so sure for 16F formats. It is telling that Metal doesn't define an RGB16F format (link) which strongly suggests that PowerVR GPUs at least will pad the format. Vulkan does define RGB16F, but while the spec requires support for R16F, RG16F and RGBA16F it doesn't require support for RGB16F (link), again suggesting lack of native support by some vendors. I wouldn't be surprised if some GPU somewhere does support RGB16F, but I suspect most would just pad. For a more definitive answer you might need to post questions on the GPU forums or experiment by examining memory usage in some controlled conditions.
And in the latter case, my main question: if I put my 16-bit heightmap into the alpha channel, so it becomes RGBA16F, will I improve performance?
Are you sampling it at the same time (i.e. from the same shader, with the same UVs)? If so, then yes absolutely it will be a better choice than using an RGB16F plus a R16F. If they're not sampled together (e.g. the heightmap is sampled in the vertex shader, the colour in the fragment shader), then it's harder to guess. Probably you'd be harming performance on the heightmap fetch (those extra bytes blowing the cache), but leaving the colour fetch unharmed (there was padding there anyway) - overall you'd lose some performance but save some memory - any performance loss is probably pretty minor and if your bottleneck lies elsewhere it may not do any harm at all.
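If you do end up packing, the upload side is simple. A hedged sketch using the ES 3.0 C API from C++, assuming float client data that the driver converts to half floats on upload; the names (rgbData, heightData, createPackedTexture) are made up for illustration:

```cpp
// Hedged sketch: interleave a heightmap into the alpha channel of an RGB colour
// texture and upload the result as one RGBA16F texture (OpenGL ES 3.0).
// rgbData is assumed to hold 3 floats per texel, heightData 1 float per texel.
#include <vector>
#include <GLES3/gl3.h>

GLuint createPackedTexture(int width, int height,
                           const std::vector<float>& rgbData,
                           const std::vector<float>& heightData)
{
    std::vector<float> rgba(width * height * 4);
    for (int i = 0; i < width * height; ++i) {
        rgba[i * 4 + 0] = rgbData[i * 3 + 0];
        rgba[i * 4 + 1] = rgbData[i * 3 + 1];
        rgba[i * 4 + 2] = rgbData[i * 3 + 2];
        rgba[i * 4 + 3] = heightData[i];          // heightmap lives in alpha
    }

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // ES 3.0 accepts GL_FLOAT client data for a GL_RGBA16F internal format;
    // the driver converts to half floats on upload.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
                 GL_RGBA, GL_FLOAT, rgba.data());
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}
```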

Is 1.0 implicit or actually stored?
I suspect "both", although perhaps not in the way you mean.
Most GPU samplers support implicit rules for missing channels (0.0 for color, 1.0 for alpha), and using these is lower power than sampling/filtering from memory, so I would expect this to use implicit loads for the missing channels.
However, hardware is also usually allergic to loading things which are not a power of two in size (things which span cache-line boundaries typically take two cycles to load on most cache architectures), so I would also expect each texel to be padded out to 64 bits. What the 16 bits of padding contain may not be 1.0; the hardware doesn't care, because it's using the implicit rules.

Related

Can I avoid texture gradient calculations in webgl?

We have a webgl/three.js application that makes extensive use of texture buffers for passing data between passes and for storing arrays of data. None of these has any use for mipmaps. We are easily able to prevent mipmap generation: at the three.js level we set min and mag filters to NearestFilter, and set generateMipmaps false.
However, the shaders do not know at compile time that there is no mipmapping. When compiled using ANGLE we get a lot of warning messages:
warning X4121: gradient-based operations must be moved out of flow control to prevent divergence. Performance may improve by using a non-gradient operation
I have recoded so that the flow around such lookups is (optionally) avoided.
On my Windows/NVidia machine, using the conditional flows improves performance and does not cause any visual issues (but does cause the messages).
I don't want the texture lookups to be gradient-based operations. What I would like to do is to write the shaders in such a way that they know at compile time that there is no decision to be made; which should (marginally) improve performance and also make the messages go away. However, I cannot see any way to do this in GLSL for GLES 2 (as used by webgl). It can be done in later versions with textureLodOffset() and various other ways. The only control in level 2 I can see is the bias option on texture2D(), but that is a bias not an absolute value and so does not resolve the issue. So, finally ...
Question: Do you know any way to prevent lod calculation in WEBGL level GLSL shaders?
You might try ensuring:
gl_FragCoord is used instead of a user varying
NEAREST filtering is set before texImage2D, instead of after (a rough sketch of this ordering follows)
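For the second point, a hedged sketch using the GLES 2.0 C API (the WebGL/three.js equivalent is setting NearestFilter and generateMipmaps = false before the texture is first used); function and parameter names here are just illustrative:

```cpp
// Hedged sketch: create a non-mipmapped data texture with NEAREST filtering set
// before the image upload, as the answer suggests (GLES 2.0 / WebGL 1 level API).
#include <GLES2/gl2.h>

GLuint createNearestTexture(int width, int height, const void* pixels)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    // Sampling state first: no mipmapped filter mode is ever in effect.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // ...then the image data; no glGenerateMipmap call anywhere.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}
```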

Which GLSL Multi Colour Linear/Radial Gradients Strategy to use?

I'm developing with OpenGL ES 2 & GLSL and I'm stuck on how to approach multi-coloured/fractioned gradients (linear and radial).
I don't know which approach is the best practice:
Get a texture of the gradient colours & sample this in the fragment shader (essentially working with a regular texture).
Computer-generate a texture of the gradient first & sample this in the fragment shader as above (no need for PNGs etc. of the gradient), caching this texture to save regeneration.
Use interpolation in the fragment shader to calculate the fragment value from the fragment position - this looks like I'd have to use multiple ifs, a loop, stuff you don't want executed per fragment.
Some other strategy I haven't conceived of.
I know this question is a bit on the subjective side, but having looked around online for information I've not found anything concrete about how to proceed...
Well, I can tell you how to proceed, but you may not like the answer. ;) The main two approaches are sampling a texture, or doing shader calculations. To decide which one is more efficient in your case, you need to implement both, and start benchmarking. There are way too many factor influencing the performance of each to give a generic answer.
One of the major factors is of course how complex your calculations are. But modern GPUs have very high raw performance for pure calculations. Not quite as much for the mobile GPUs you're most likely using since you're asking about ES, but even the latest mobile GPUs have become quite powerful. Branches aren't free, but not necessarily as harmful as you might expect.
On the other hand, texture sampling looks like a single operation in the shader, but based on that alone you should not assume that it's automatically faster than executing a bunch of computations. Texture sampling performance can be limited by many factors, including throughput of the texture sampling hardware units, memory bandwidth, cache hit rates, etc. Particularly if your textures need to be fairly large to give you the necessary precision, memory bandwidth can hurt you, and accessing memory on a mobile device consumes significant power. Also, just the additional memory usage is undesirable since you mostly deal with very constrained amounts of memory.
Of course the performance characteristics can vary greatly between different GPUs. So if you want to make reliable conclusions, you need to benchmark on a variety of devices.
For the approach where you implement the computations in the shader, make sure that it is as optimal as it can be. Avoid branches where reasonably possible, or at least benchmark various options to see how much the branches hurt performance. If there are parts of the computation that are the same for each fragment, pre-compute the values and pass them into the shader. Replace expensive operations by cheaper ones where possible. For example, instead of dividing by a uniform value, pass in the inverse as a uniform, and use a multiplication instead. Use vector operations where possible.
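As a minimal illustration of that last point, a hedged sketch (the invRadius uniform and the radial-gradient context are assumptions, not something from the question): do the division once on the CPU and let the shader multiply.

```cpp
// Hypothetical sketch: replace a per-fragment division by a uniform with a
// multiplication by its reciprocal. The invRadius name is made up.
#include <GLES2/gl2.h>

void setGradientUniforms(GLuint program, float radius)
{
    // Assumed shader side:  float t = dist * invRadius;   // instead of dist / radius
    GLint loc = glGetUniformLocation(program, "invRadius");
    glUniform1f(loc, 1.0f / radius);   // one division on the CPU, once per draw
}
```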

OpenGL ES 2.0: glUseProgram vs glUniform performance

Which is faster, a single call to glUseProgram, or sending e.g. 6 or so floats via glUniform (batched or separately), and by approximately how much?
Can you describe in more detail the scenario where you think this affects the performance of the rendering pipeline? They offer completely different functionalities and I don't see why you would care about the performance of glUseProgram vs glUniform.
Now let's analyze what happens when you use these functions, to get an idea of their cost.
When you call glUseProgram it changes several OpenGL rendering states because we are going to use new shaders attached to the program object. The specification says that vertex and fragment programs are installed in the processors when you invoke this function. That alone seems costly enough to overshadow the cost of glUniform. Also, when you install new vertex and fragment programs, additional states of the rendering pipeline are changed to accommodate the number of texture units and data layout used by the programs.
glUniform copies data from one location of memory to another to specify the value of a uniform variable. The worst case would be copying matrices, which seems less complex than glUseProgram.
But in the end, it all depends on the amount of data you are transferring with glUniform, on the underlying implementation of glUseProgram (it could be super optimized by the driver and have a very small cost), and on whether your engine is smart enough to group the geometry that uses the same program and draw it without changing states.
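To make that last point concrete, here is a hedged sketch of the usual mitigation; the DrawItem struct and its fields are assumptions for illustration. Sorting draws by program means glUseProgram runs once per group while glUniform still runs per draw:

```cpp
// Hedged sketch: sort draw items by program so the heavier glUseProgram state
// change happens once per group, while the cheap glUniform call stays per draw.
// DrawItem and its fields are assumptions for illustration.
#include <algorithm>
#include <vector>
#include <GLES2/gl2.h>

struct DrawItem {
    GLuint program;
    GLint  colorLoc;
    float  color[4];
    // ...vertex buffers, draw parameters, etc.
};

void drawAll(std::vector<DrawItem>& items)
{
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) { return a.program < b.program; });

    GLuint current = 0;
    for (const DrawItem& item : items) {
        if (item.program != current) {               // rare, heavier state change
            glUseProgram(item.program);
            current = item.program;
        }
        glUniform4fv(item.colorLoc, 1, item.color);  // cheap per-draw data upload
        // glDrawArrays(...) / glDrawElements(...) for this item
    }
}
```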

Can raymarching be accelerated under an SIMD architecture?

The answer would seem to be no, because raymarching is highly conditional, i.e. each ray follows a unique execution path, since on each step we check for opacity, termination, etc., which will vary based on the direction of the individual ray.
So it would seem that SIMD would largely not be able to accelerate this; rather, MIMD would be required for acceleration.
Does this make sense? Or am I missing something(s)?
As stated already, you could probably get a speedup from implementing your vector math using SSE instructions (be aware of the effects discussed here - also for the other approach). This approach would allow the code to stay concise and maintainable.
I assume, however, that your question is about "packet traversal" (or something like it), in other words processing multiple scalar values, each belonging to a different ray:
In principle it should be possible to defer the shading to another pass. The SIMD packet could be repopulated with a new ray once the bare marching pass terminates, with the temporary result stored as input for the shading pass. This would allow you to parallelize a certain, case-dependent percentage of your code, exploiting all four SIMD lanes.
Tiling the image and indexing the rays within it in Morton order might be a good idea too, in order to avoid cache pressure (unless your geometry is strictly procedural).
You won't know whether it pays off unless you try. My guess is that if it does, the amount of speedup might not be worth the complication of the code for just four lanes.
Have you considered using an SIMT architecture such as a programmable GPU? A somewhat up-to-date programmable graphics board allows you to perform raymarching at interactive rates (see it happen in your browser here).
Over the last few days I built a software-based raymarcher for a Menger sponge. At the moment it uses no SIMD, and I also used no special algorithm. I just trace from -1 to 1 in X and Y, which are U and V for the destination texture. Then I have a camera position and a destination, which I use to calculate the increment vector for the raymarch.
After that I use a constant number of iterations to perform, in which only one branch decides if there's an intersection with the fractal volume. So if my camera eye is E and my direction vector is D, I have to find the smallest t. If I find it, or reach a maximal distance, I break the loop. At the end I have t; from that I calculate the fragment color.
In my opinion it should be possible to parallelize these operations with SSE1/2, because one can resolve the branch by nulling the corresponding field in the vector (__m64 / __m128), so further SIMD operations won't apply to it. It really depends on what you raymarch/-cast, but if you just calculate a fragment color from a function (like my fractal curve here) and don't access memory non-linearly, there are some tricks to make it possible.
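A hedged sketch of that lane-nulling idea with SSE1 intrinsics; the function shape, the epsilon and the termination conditions are assumptions for illustration, not the author's actual code:

```cpp
#include <xmmintrin.h>  // SSE1

// One marching step for four rays at once: t is the current distance along each
// ray, dist the distance estimate just evaluated for each ray, and active a mask
// of lanes still marching. The caller loops until _mm_movemask_ps(active) == 0.
void marchStep(__m128& t, __m128 dist, __m128 maxDist, __m128& active)
{
    const __m128 epsilon = _mm_set1_ps(1e-4f);

    // A lane stays active while its distance estimate is above epsilon (no hit yet)
    // and it has not walked past the maximum distance.
    __m128 stillMarching = _mm_and_ps(_mm_cmpgt_ps(dist, epsilon),
                                      _mm_cmplt_ps(t, maxDist));
    active = _mm_and_ps(active, stillMarching);

    // "Null the field": advance t only in active lanes, add zero in finished ones.
    t = _mm_add_ps(t, _mm_and_ps(dist, active));
}
```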
Sure, this answer contains speculation, but I will keep you informed when I've parallelized this routine.
Only insofar as SSE, for instance, lets you do operations on vectors in parallel.

Improving raytracer performance

I'm writing a comparatively straightforward raytracer/path tracer in D (http://dsource.org/projects/stacy), but even with full optimization it still needs several thousand processor cycles per ray. Is there anything else I can do to speed it up? More generally, do you know of good optimizations / faster approaches for ray tracing?
Edit: this is what I'm already doing.
Code is already running highly parallel
temporary data is structured in a cache-efficient fashion as well as aligned to 16b
Screen divided into 32x32-tiles
Destination array is arranged in such a way that all subsequent pixels in a tile are sequential in memory
Basic scene graph optimizations are performed
Common combinations of objects (plane-plane CSG as in boxes) are replaced with preoptimized objects
Vector struct capable of taking advantage of GDC's automatic vectorization support
Subsequent hits on a ray are found via lazy evaluation; this prevents needless calculations for CSG
Triangles are neither supported nor a priority; plain primitives only, as well as CSG operations and basic material properties
Bounding is supported
The typical first-order improvement to raytracer speed is some sort of spatial partitioning scheme. Based only on your project outline page, it seems you haven't done this.
Probably the most usual approach is an octree, but the best approach may well be a combination of methods (e.g. spatial partitioning trees and things like mailboxing). Bounding box/sphere tests are a quick, cheap and nasty approach, but you should note two things: 1) they don't help much in many situations and 2) if your objects are already simple primitives, you aren't going to gain much (and might even lose). A regular grid is easier to implement than an octree for spatial partitioning, but it will only work really well for scenes whose surfaces are somewhat uniformly distributed.
A lot depends on the complexity of the objects you represent, your internal design (i.e. do you allow local transforms, referenced copies of objects, implicit surfaces, etc), as well as how accurate you're trying to be. If you are writing a global illumination algorithm with implicit surfaces the tradeoffs may be a bit different than if you are writing a basic raytracer for mesh objects or whatever. I haven't looked at your design in detail so I'm not sure what, if any, of the above you've already thought about.
Like any performance optimization process, you're going to have to measure first to find where you're actually spending the time, then improving things (algorithmically by preference, then code bumming by necessity)
One thing I learned with my ray tracer is that a lot of the old rules don't apply anymore. For example, many ray tracing algorithms do a lot of testing to get an "early out" of a computationally expensive calculation. In some cases, I found it was much better to eliminate the extra tests and always run the calculation to completion. Arithmetic is fast on a modern machine, but a missed branch prediction is expensive. I got something like a 30% speed-up on my ray-polygon intersection test by rewriting it with minimal conditional branches.
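The rewritten ray-polygon test from that answer isn't shown, so the following is a different but representative sketch of the same idea: a ray/AABB slab test whose hot path uses only min/max, with a single comparison at the end instead of per-axis early-outs. It assumes a precomputed reciprocal ray direction (zero direction components need extra care):

```cpp
// Hedged sketch: branch-minimal ray/AABB slab test. Struct layouts are assumptions.
#include <algorithm>

struct Ray  { float ox, oy, oz; float invDx, invDy, invDz; };  // origin, 1/direction
struct AABB { float minX, minY, minZ, maxX, maxY, maxZ; };

bool hitsBox(const Ray& r, const AABB& b, float tMax)
{
    float t1 = (b.minX - r.ox) * r.invDx, t2 = (b.maxX - r.ox) * r.invDx;
    float t3 = (b.minY - r.oy) * r.invDy, t4 = (b.maxY - r.oy) * r.invDy;
    float t5 = (b.minZ - r.oz) * r.invDz, t6 = (b.maxZ - r.oz) * r.invDz;

    float tNear = std::max({std::min(t1, t2), std::min(t3, t4), std::min(t5, t6)});
    float tFar  = std::min({std::max(t1, t2), std::max(t3, t4), std::max(t5, t6)});

    // Single comparison at the end instead of early-outs per axis.
    return tFar >= std::max(tNear, 0.0f) && tNear <= tMax;
}
```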
Sometimes the best approach is counter-intuitive. For example, I found that many scenes with a few large objects ran much faster when I broke them down into a large number of smaller objects. Depending on the scene geometry, this can allow your spatial subdivision algorithm to throw out a lot of intersection tests. And let's face it, intersection tests can be made only so fast. You have to eliminate them to get a significant speed-up.
Hierarchical bounding volumes help a lot, but I finally grokked the kd-tree, and got a HUGE increase in speed. Of course, building the tree has a cost that may make it prohibitive for real-time animation.
Watch for synchronization bottlenecks.
You've got to profile to be sure to focus your attention in the right place.
Is there anything else I can do to speed it up?
D, depending on the implementation and compiler, delivers reasonably good performance. As you haven't explained what ray tracing methods and optimizations you're using already, I can't give you much help there.
The next step, then, is to run a timing analysis on the program, and recode the most frequently used or slowest code that impacts performance the most in assembly.
More generally, check out the resources in these questions:
Literature and Tutorials for Writing a Ray Tracer
Anyone know of a really good book about Ray Tracing?
Computer Graphics: Raytracing and Programming 3D Renders
raytracing with CUDA
I really like the idea of using a graphics card (a massively parallel computer) to do some of the work.
There are many other raytracing related resources on this site, some of which are listed in the sidebar of this question, most of which can be found in the raytracing tag.
I don't know D at all, so I'm not able to look at the code and find specific optimizations, but I can speak generally.
It really depends on your requirements. One of the simplest optimizations is just to reduce the number of reflections/refractions that any particular ray can follow, but then you start to lose out on the "perfect result".
Raytracing is also an "embarrassingly parallel" problem, so if you have the resources (such as a multi-core processor), you could look into calculating multiple pixels in parallel.
Beyond that, you'll probably just have to profile and figure out what exactly is taking so long, then try to optimize that. Is it the intersection detection? Then work on optimizing the code for that, and so on.
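As a minimal sketch of the embarrassingly-parallel point (the shadePixel helper and the row-interleaved split are assumptions for illustration, not part of the original project):

```cpp
// Hedged sketch: trace interleaved rows on separate threads. shadePixel() is an
// assumed helper that returns a packed colour; nothing here is project-specific.
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

uint32_t shadePixel(int x, int y);   // assumed to exist

void renderParallel(int width, int height, std::vector<uint32_t>& framebuffer)
{
    const int numThreads =
        static_cast<int>(std::max(1u, std::thread::hardware_concurrency()));
    std::vector<std::thread> workers;

    for (int t = 0; t < numThreads; ++t) {
        workers.emplace_back([=, &framebuffer] {
            // Each thread takes every numThreads-th row: simple static load balancing.
            for (int y = t; y < height; y += numThreads)
                for (int x = 0; x < width; ++x)
                    framebuffer[y * width + x] = shadePixel(x, y);
        });
    }
    for (std::thread& w : workers) w.join();
}
```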
Some suggestions.
Use bounding objects to fail fast.
Project the scene as a first step (as common graphics cards do) and use raytracing only for light calculations.
Parallelize the code.
Raytrace every other pixel. Get the color in between by interpolation. If the colors vary greatly (you are on the edge of an object), raytrace the pixel in between. It is cheating, but on simple scenes it can almost double the performance while sacrificing some image quality (a rough sketch follows after this list).
Render the scene on GPU, then load it back. This will give you the first ray/scene hit at GPU speeds. If you do not have many reflective surfaces in the scene, this would reduce most of your work to plain old rendering. Rendering CSG on GPU is unfortunately not completely straightforward.
Read source code of PovRay for inspiration. :)
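A rough sketch of the every-other-pixel suggestion from the list above; the Color struct, tracePixel helper and threshold are assumptions for illustration, and image borders are ignored for brevity:

```cpp
// Hedged sketch: trace even columns, interpolate odd ones, and re-trace odd
// pixels whose neighbours differ too much (a probable object edge).
#include <cmath>
#include <vector>

struct Color { float r, g, b; };

Color tracePixel(int x, int y);                       // assumed to exist

static float diff(const Color& a, const Color& b)
{
    return std::fabs(a.r - b.r) + std::fabs(a.g - b.g) + std::fabs(a.b - b.b);
}

void renderInterpolated(int width, int height, std::vector<Color>& image, float threshold)
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; x += 2)            // trace every other column
            image[y * width + x] = tracePixel(x, y);

        for (int x = 1; x < width - 1; x += 2) {      // fill the gaps
            const Color& left  = image[y * width + x - 1];
            const Color& right = image[y * width + x + 1];
            if (diff(left, right) > threshold)        // probably an edge: trace it
                image[y * width + x] = tracePixel(x, y);
            else                                      // flat area: cheat and blend
                image[y * width + x] = { (left.r + right.r) * 0.5f,
                                         (left.g + right.g) * 0.5f,
                                         (left.b + right.b) * 0.5f };
        }
    }
}
```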
First you have to make sure that you use very fast algorithms (implementing them can be a real pain, but it depends on what you want to do, how far you want to go and how fast it should be; it's a kind of trade-off).
Some more hints from me:
- don't use mailboxing techniques; papers sometimes note that they don't scale well on current architectures because of the counting overhead
- don't use BSPs/octrees, they are relatively slow
- don't use the GPU for raytracing, it is far too slow for advanced effects like reflection, shadows, refraction, photon mapping and so on (I use it only for shading, but that's my own preference)
For a completely static scene kd-trees are unbeatable, and for dynamic scenes there are clever algorithms that scale very well on a quad-core (I am not sure about performance beyond that).
And of course, for really good performance you need a lot of SSE code (with, of course, not too many jumps), but for merely decent performance (I'm talking about maybe 10-15% here) compiler intrinsics are enough to implement your SSE stuff.
And some decent papers about some of the algorithms I was talking about:
"Fast Ray/Axis-Aligned Bounding Box Overlap Tests using Ray Slopes"
(a very fast, very well parallelisable (SSE) AABB-ray hit test) (note: the code in the paper is not complete; just google for the title of the paper and you'll find it)
http://graphics.tu-bs.de/publications/Eisemann07RS.pdf
"Ray Tracing Deformable Scenes using Dynamic Bounding Volume Hierarchies"
http://www.sci.utah.edu/~wald/Publications/2007///BVH/download//togbvh.pdf
If you know how the above algorithm works, then this one is even better:
"The Use of Precomputed Triangle Clusters for Accelerated Ray Tracing in Dynamic Scenes"
http://garanzha.com/Documents/UPTC-ART-DS-8-600dpi.pdf
I'm also using the Plücker test to determine quickly (not that accurate, but well, you can't have everything) whether I hit a polygon; it works very nicely with SSE and above.
So my conclusion is that there are so many great papers out there about so many topics related to raytracing (how to build fast, efficient trees, how to shade (BRDF models), and so on) that it is a really amazing and interesting field to experiment in, but you also need a lot of spare time because it is so damn complicated, yet fun.
My first question is: are you trying to optimize the tracing of one single still image, or is this about optimizing the tracing of multiple frames in order to calculate an animation?
Optimizing for a single shot is one thing; if you want to calculate successive frames in an animation there are lots of new things to think about / optimize.
You could
use an SAH-optimized bounding volume hierarchy...
...possibly using packet traversal,
introduce importance sampling,
access the tiles ordered by Morton code for better cache coherency, and
much more - but those were the suggestions I could immediately think of. In more words:
You can build an optimized hierarchy based on statistics in order to quickly identify candidate nodes when intersecting geometry. In your case you'll have to combine the automatic hierarchy with the modeling hierarchy, that is, either constrain the build or have it possibly clone modeling information.
"Packet traversal" means you use SIMD instructions to compute 4 parallel scalars, each belonging to its own ray, for traversing the hierarchy (which is typically the hot spot), in order to squeeze the most performance out of the hardware.
You can perform some per-ray statistics in order to control the sampling rate (the number of secondary rays shot) based on the contribution to the resulting pixel color.
Using a space-filling curve on the tile allows you to decrease the average distance between successively traced pixels and thus increase the probability that your performance benefits from cache hits.
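A hedged sketch of the Morton-code ordering mentioned above, assuming 16-bit tile-local pixel coordinates; nothing here is specific to the original project:

```cpp
// Hedged sketch: interleave the bits of x and y so rays that are close on screen
// are also close in traversal order (Morton / Z-order indexing within a tile).
#include <cstdint>

uint32_t mortonEncode(uint16_t x, uint16_t y)
{
    auto spread = [](uint32_t v) {       // insert a zero bit between each bit of v
        v = (v | (v << 8)) & 0x00FF00FFu;
        v = (v | (v << 4)) & 0x0F0F0F0Fu;
        v = (v | (v << 2)) & 0x33333333u;
        v = (v | (v << 1)) & 0x55555555u;
        return v;
    };
    return spread(x) | (spread(y) << 1);
}
```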
