Optimization ideas: apply a LUT (lookup table) to an image

I am currently working on a project that uses LUTs to modify the colors of images.
My problem is that my program is not optimized...
What my program does:
* Opens a LUT file (.cube) and stores the values in memory
* On each pixel of the image, trilinear interpolation is used to change the colors using the LUT
What I've tried:
* Downscaling the image, but the process still takes far too long...
How can programs such as Premiere Pro or DaVinci Resolve apply a LUT to footage and play it back at 24 fps? My program takes 10 s to apply a LUT to a single JPG/DNG file!

The most efficient way to do this would be on the GPU, which can run many simple interpolation and lookup operations on many pixels simultaneously.
This article: https://developer.nvidia.com/gpugems/GPUGems2/gpugems2_chapter24.html
describes the algorithm for you, and it's simple enough that it's trivial to port to OpenGL or another GPU shading language:
void main(in float2 sUV : TEXCOORD0,
          out half4 cOut : COLOR0,
          const uniform samplerRECT imagePlane,
          const uniform sampler3D lut,
          const uniform float3 lutSize)
{
    // get raw RGB pixel values
    half3 rawColor = texRECT(imagePlane, sUV).rgb;

    // calculate scale and offset values
    half3 scale = (lutSize - 1.0) / lutSize;
    half3 offset = 1.0 / (2.0 * lutSize);

    // apply the LUT
    cOut.rgb = tex3D(lut, scale * rawColor + offset);
}
Outside of that you will have to load the LUT into the GPU with your application code (as the 3D texture bound to the sampler3D uniform above), and then stream every video frame into the GPU so it can pass through your fragment shader in a render/work loop. This is most likely what professional video editing programs do in order to apply LUTs under realtime video constraints.
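As a rough illustration of that application-side setup, here is a minimal sketch of uploading an already-parsed .cube LUT as a 3D texture in plain OpenGL (the names are mine, not from the article; GL_RGB32F assumes float-texture support, and glTexImage3D may need an extension loader on older contexts):
#include <vector>
#include <GL/glew.h>   // or any loader providing the needed GL entry points

// Upload the parsed .cube data (lutSize^3 RGB float triples) as a 3D texture
// so the fragment shader's sampler3D does the trilinear interpolation in hardware.
GLuint uploadLUT(const std::vector<float> &lutRGB, int lutSize)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);   // trilinear lookup
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGB32F,
                 lutSize, lutSize, lutSize, 0,
                 GL_RGB, GL_FLOAT, lutRGB.data());
    return tex;
}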
P.S. harold's comment about precalculating the lookup entries is also a valid way to speed your process up, making the operation purely a memory access with the lookup. It'll still probably be orders of magnitude less efficient than GPU processing because of how much slower CPU memory access is compared to what the GPU does, and it's very memory inefficient, depending on the system you do it on and the dimensionality and size of your LUT.
For example, let's say you want to make the 'full' 3D LUT for 24-bit RGB. That means your final cube needs an edge of size 256, so your final size is 256^3 * 3 (RGB) * 2 (half-float) bytes, for a total of nearly 100 MB. Obviously if it is just a 1D LUT, or a lower color bit-depth, this might not be an issue; however this method is still inefficient compared to letting the GPU handle the interpolation for you.
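For illustration, a sketch of that precomputation (my own helper names, not harold's actual code); once the table is built, applying the LUT to a pixel is a single indexed read:
#include <cstdint>
#include <vector>

struct RGB8 { uint8_t r, g, b; };

// Build one output entry per possible 24-bit RGB input by evaluating the
// trilinear .cube interpolation once up front (applyCubeLUT stands in for
// whatever interpolation routine the program already has).
std::vector<RGB8> precomputeFullLUT(RGB8 (*applyCubeLUT)(uint8_t, uint8_t, uint8_t))
{
    std::vector<RGB8> table(256u * 256u * 256u);   // ~48 MB at 3 bytes per entry
    for (unsigned r = 0; r < 256; ++r)
        for (unsigned g = 0; g < 256; ++g)
            for (unsigned b = 0; b < 256; ++b)
                table[(r << 16) | (g << 8) | b] =
                    applyCubeLUT(uint8_t(r), uint8_t(g), uint8_t(b));
    return table;
}

// Per-pixel application becomes a pure memory access.
inline RGB8 lookup(const std::vector<RGB8> &table, RGB8 in)
{
    return table[(unsigned(in.r) << 16) | (unsigned(in.g) << 8) | in.b];
}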

Related

Optimizing texture fetches with higher mip levels

Let's say I have a shader program in DirectX or OpenGL rendering a full-screen quad, and in the pixel/fragment shader I sample some huge textures at random texture coordinates. That is, the same texture coordinate is used for all texture samples within one shader invocation, but it varies between invocations. These fetches cause a performance drop; I even suspect that, because of the size of the textures, the GPU texture cache is not big enough and is used inefficiently.
Now I have a theoretical question: can I improve performance by using low-resolution (e.g. 32x32) mask textures built by mipmapping the large textures, so that if the value of the mask texture at a given coordinate at some higher mip level is not relevant, I skip the fetch from the full-size level 0? Something like this in HLSL (the GLSL code is pretty similar, but there is no [branch] attribute):
float2 tc = calculateTexCoordinates();
bool performHeavyComputations = testValue(largeMipmappedTexture.SampleLevel(sampler, tc, 5));
float result = 0;

[branch]
if (performHeavyComputations)
{
    result += largeMipmappedTexture.SampleLevel(sampler, tc, 0);
}
About 50% of texels at mip level 5 will not pass the test. And so a lot of shader invocations should not sample the full-size textures.
But I am introducing branching into the code. Can this branching hurt performance even more than sampling the full-size texture when the sample is not needed? Different GPUs may behave differently, and some may not even support branching: will they perform two fetches instead of one?
I can test this code on some machines later, but my question is theoretical.
And can you suggest other optimizations if this approach doesn't work well?

What is the expected renderscript rsSample() performance?

rsSample() seems intended for sampling from mipmapped textures across multiple levels of detail to avoid aliasing. The fisheye example would be a good use case.
The implementation simply samples 8 pixels from the underlying mipmap and does a linear blend of them.
It seems I can only get 15 Mpixels/second on a Google Pixel with this simple kernel:
uchar4 __attribute__((kernel)) rescaletest(uint32_t x, uint32_t y) {
    float2 location = {x, y};
    return convert_uchar4(rsSample(gInput8888, gWrapLinearMipLinear, location/2000.f, 1.5f)*255.f);
}
Considering that all composited graphics presumably use mipmaps, and that compositing even one texture at 60 fps needs 120 Mpixels/sec, what am I doing wrong?

Summed area table in GLSL and GPU fragment shader execution

I am trying to compute the integral image (aka summed area table) of a texture I have in the GPU memory (a camera capture), the goal being to compute the adaptive threshold of said image. I'm using OpenGL ES 2.0, and still learning :).
I did a test with a simple Gaussian blur shader (vertical/horizontal passes), which works fine, but I need a much bigger variable averaging area for it to give satisfactory results.
I did implement a version of that algorithm on CPU before, but I'm a bit confused on how to implement that on a GPU.
I tried to do a (completely incorrect) test with just something like this for every fragment :
#version 100
#extension GL_OES_EGL_image_external : require
precision highp float;

uniform sampler2D u_Texture;       // The input texture.
varying lowp vec2 v_TexCoordinate; // Interpolated texture coordinate per fragment.
uniform vec2 u_PixelDelta;         // Pixel delta

void main()
{
    // get neighboring pixels values
    float center = texture2D(u_Texture, v_TexCoordinate).r;
    float a = texture2D(u_Texture, v_TexCoordinate + vec2(u_PixelDelta.x * -1.0, 0.0)).r;
    float b = texture2D(u_Texture, v_TexCoordinate + vec2(0.0, u_PixelDelta.y * 1.0)).r;
    float c = texture2D(u_Texture, v_TexCoordinate + vec2(u_PixelDelta.x * -1.0, u_PixelDelta.y * 1.0)).r;

    // compute value
    float pixValue = center + a + b - c;

    // Result stores value (R) and original gray value (G)
    gl_FragColor = vec4(pixValue, center, center, 1.0);
}
And then another shader to get the area that I want and then get the average. This is obviously wrong as there are multiple execution units operating at the same time.
I know that the common way of computing a prefix sum on a GPU is to do it in two passes (vertical/horizontal, as discussed in this thread or here), but isn't there a problem here, since each cell has a data dependency on the previous (top or left) one?
I can't seem to understand the order in which the multiple execution units on a GPU will process the different fragments, and how a two-pass filter can solve that issue. As an example, if I have some values like this :
2 1 5
0 3 2
4 4 7
The two passes should give (first columns, then rows):
2 1  5         2  3  8
2 4  7    ->   2  6 13
6 8 14         6 14 28
How can I be sure that, for example, the value at [0;2] will be computed as 6 (2 + 0 + 4) and not as 4 (0 + 4, if the cell above still holds its original 0 because it hasn't been processed yet)?
Also, as I understand it, fragments are not pixels (if I'm not mistaken); would the values I store into one of my textures in the first pass be the same in the next pass if I use the exact same coordinates passed from the vertex shader, or will they be interpolated in some way?
Tommy and Bartvbl address your questions about a summed-area table, but your core problem of an adaptive threshold may not need that.
As part of my open source GPUImage framework, I've done some experimentation with optimizing blurs over large radii using OpenGL ES. Generally, increasing blur radii leads to a significant increase in texture sampling and calculations per pixel, with an accompanying slowdown.
However, I found that for most blur operations you can apply a surprisingly effective optimization to cap the number of blur samples. If you downsample the image before blurring, blur at a smaller pixel radius (radius / downsampling factor), and then linearly upsample, you can arrive at a blurred image that is the equivalent of one blurred at a much larger pixel radius. In my tests, these downsampled, blurred, and then upsampled images look almost identical to the ones blurred based on the original image resolution. In fact, precision limits can lead to larger-radii blurs done at a native resolution breaking down in image quality past a certain size, where the downsampled ones maintain the proper image quality.
By adjusting the downsampling factor to keep the downsampled blur radius constant, you can achieve near constant-time blurring speeds in the face of increasing blur radii. For an adaptive threshold, the image quality should be good enough to use for your comparisons.
I use this approach in the Gaussian and box blurs within the latest version of the above-linked framework, so if you're running on Mac, iOS, or Linux, you can evaluate the results by trying out one of the sample applications. I have an adaptive threshold operation based on a box blur that uses this optimization, so you can see if the results there are what you want.
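To make the radius/downsampling relationship concrete, here is a small sketch of the bookkeeping involved (the names and the cap of 8 samples are illustrative, not GPUImage's actual internals):
// Blur a downsampled copy at a capped radius, then linearly upsample:
// the result approximates a blur at the originally requested radius.
struct BlurPlan {
    float downsampleFactor;   // how much to shrink the image before blurring
    float effectiveRadius;    // radius actually used on the small image
};

BlurPlan planBlur(float requestedRadius, float maxRadius = 8.0f)
{
    BlurPlan plan;
    plan.downsampleFactor = (requestedRadius > maxRadius) ? requestedRadius / maxRadius : 1.0f;
    plan.effectiveRadius  = requestedRadius / plan.downsampleFactor;   // stays <= maxRadius
    return plan;
}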
As per the above, it's not going to be fantastic on a GPU. But assuming the cost of shunting data between the GPU and CPU is more troubling, it may still be worth persevering.
The most obvious prima facie solution is to split horizontal/vertical as discussed. Use an additive blending mode, create a quad that draws the whole source image, then e.g. for the horizontal step on a bitmap of width n issue a call that requests the quad be drawn n times, the 0th time at x = 0, the mth time at x = m. Then ping-pong via an FBO, switching the target buffer of the horizontal draw into the source texture for the vertical step.
Memory accesses are probably O(n^2) (i.e. you'll probably cache quite well, but that's hardly a complete relief), so it's a fairly poor solution. You could improve it with divide and conquer, doing the same thing in bands; e.g. for the vertical step, independently sum individual rows of 8, after which the error in every row below the final one is the failure to include whatever the sums are on that row. So perform a second pass to propagate those.
However, an issue with accumulating in the framebuffer is clamping to avoid overflow: if you're expecting a value greater than 255 anywhere in the integral image then you're out of luck, because the additive blending will clamp and GL_RG32I et al. didn't reach ES until 3.0.
The best solution I can think of, without using any vendor-specific extensions, is to split up the bits of your source image and combine the channels after the fact. Supposing your source image were 4-bit and your image less than 256 pixels in both directions, you'd put one bit each in the R, G, B and A channels, perform the normal additive step, then run a quick recombine shader computing value = A + (B*2) + (G*4) + (R*8). If your texture is larger or smaller in size or bit depth then scale up or down accordingly.
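A tiny CPU-side model of that split-and-recombine arithmetic, just to make the weighting explicit (the names are mine, not part of the original answer):
#include <cstdint>

struct ChannelSums { uint32_t r = 0, g = 0, b = 0, a = 0; };

// Split one 4-bit source value across the four channels, one bit each,
// and accumulate; the additive blending pass does the same per pixel.
void accumulate(ChannelSums &s, uint8_t value4bit)
{
    s.a += (value4bit >> 0) & 1;   // least significant bit
    s.b += (value4bit >> 1) & 1;
    s.g += (value4bit >> 2) & 1;
    s.r += (value4bit >> 3) & 1;   // most significant bit
}

// The recombine step described above: by linearity of the sum, weighting
// the per-channel totals recovers the true sum of the 4-bit values.
uint32_t recombine(const ChannelSums &s)
{
    return s.a + s.b * 2 + s.g * 4 + s.r * 8;
}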
(platform specific observation: if you're on iOS then you've hopefully already got a CVOpenGLESTextureCache in the loop, which means you have CPU and GPU access to the same texture store, so you might well prefer to kick this step off to GCD. iOS is amongst the platforms supporting EXT_shader_framebuffer_fetch; if you have access to that then you can write any old blend function you like and at least ditch the combination step. Also you're guaranteed that preceding geometry has completed before you draw so if each strip writes its totals where it should and also to the line below then you can perform the ideal two-pixel-strips solution with no intermediate buffers or state changes)
What you are attempting cannot be done in a fragment shader. GPUs are by nature very different from CPUs: they execute their instructions in parallel, in massive numbers, at the same time. Because of this, OpenGL does not make any guarantees about execution order, because the hardware physically doesn't allow it to.
So there is not really any defined order other than "whatever the GPU thread block scheduler decides".
Fragments are pixels, sorta-kinda. They are pixels that potentially end up on screen. If one triangle ends up in front of another, the colour value calculated for the occluded fragment is discarded. This happens regardless of whatever colour was stored at that pixel in the colour buffer previously.
As for creating the summed area table on the GPU, I think you may first want to look at GLSL "Compute Shaders", which are specifically made for this sort of thing.
I think you may be able to get this to work by creating a single thread for each row of pixels in the table, then having every thread "lag behind" by one pixel compared to the previous row.
In pseudocode:
int row_id = thread_id()
for column_index in (image.cols + image.rows):
    int my_current_column_id = column_index - row_id
    if my_current_column_id >= 0 and my_current_column_id < image.width:
        // calculate sums
The catch of this method is that all threads should be guaranteed to execute their instructions simultaneously without getting ahead of one another. This is guaranteed in CUDA, but I'm not sure whether it is in OpenGL compute shaders. It may be a starting point for you, though.
It may look surprising to a beginner, but the prefix sum (SAT) calculation is well suited to parallelization. While the Hensley algorithm is the most intuitive to understand (and is also implemented in OpenGL), more work-efficient parallel methods are available; see CUDA scan. The paper from Sengupta discusses a parallel method with reduce and down-sweep phases, which seems to be the state-of-the-art efficient method. These are valuable materials, but they do not go into OpenGL shader implementations in detail. The closest document is the presentation you have found (it refers to the Hensley publication), since it has some shader snippets. This job is doable entirely in a fragment shader with FBO ping-pong. Note that the FBO and its texture need a high-precision internal format; GL_RGB32F would be best, but I am not sure whether it is supported in OpenGL ES 2.0.
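For intuition, here is a CPU model of the O(n log n) Hensley-style scan that the ping-pong passes implement; each pass reads only the previous buffer, so within a pass there is no ordering dependency between fragments (plain C++, my own naming):
#include <cstddef>
#include <vector>

// In pass p every element adds the value 2^p positions to its left, read
// from the previous buffer; after ceil(log2(n)) passes the buffer holds
// the inclusive prefix sum.
std::vector<float> inclusiveScanPingPong(std::vector<float> src)
{
    std::vector<float> dst(src.size());
    for (std::size_t stride = 1; stride < src.size(); stride *= 2) {
        for (std::size_t i = 0; i < src.size(); ++i)          // one full-screen pass
            dst[i] = (i >= stride) ? src[i] + src[i - stride] : src[i];
        src.swap(dst);                                        // FBO ping-pong
    }
    return src;
}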

Scaling Laplacian of Gaussian Edge Detection

I am using Laplacian of Gaussian for edge detection using a combination of what is described in http://homepages.inf.ed.ac.uk/rbf/HIPR2/log.htm and http://wwwmath.tau.ac.il/~turkel/notes/Maini.pdf
Simply put, I'm using this equation:
for(int i = -(kernelSize/2); i <= (kernelSize/2); i++)
{
    for(int j = -(kernelSize/2); j <= (kernelSize/2); j++)
    {
        double L_xy = -1/(Math.PI * Math.pow(sigma,4))*(1 - ((Math.pow(i,2) + Math.pow(j,2))/(2*Math.pow(sigma,2))))*Math.exp(-((Math.pow(i,2) + Math.pow(j,2))/(2*Math.pow(sigma,2))));
        L_xy *= 426.3;
    }
}
and using the L_xy variable to build the LoG kernel.
The problem is that when the image size is larger, applying the same kernel makes the filter more sensitive to noise. The edge sharpness is also not the same.
Let me put an example here...
Suppose we've got this image:
Using a value of sigma = 0.9 and a 5 × 5 kernel on a 480 × 264 pixel version of this image, we get the following output:
However, if we use the same values on a 1920 × 1080 pixels version of this image (same sigma value and kernel size), we get something like this:
[Both images are scaled-down versions of an even larger image. The scaling down was done in a photo editor, which means the data contained in the two images is not exactly the same. But, at least, it should be very close.]
Given that the larger image is roughly 4 times the smaller one... I also tried scaling sigma by a factor of 4 (sigma *= 4), and the output was... you guessed it, a black canvas.
Could you please help me understand how to implement a LoG edge detector that finds the same features in an input signal even if that signal is scaled up or down (the scaling factor will be given)?
Looking at your images, I suppose you are working in 24-bit RGB. When you increase your sigma, the response of your filter weakens accordingly; thus what you get in the larger image with a larger kernel are values close to zero, which are either truncated or so close to zero that your display cannot distinguish them.
To make differentials across different scales comparable, you should use the scale-normalized differential operator (Lindeberg et al.):

\mathrm{LoG}_{\mathrm{norm}}(x, y; \sigma) = \sigma^{\gamma} \, \nabla^2 (G_{\sigma} * L)

Essentially, the differential operators are applied to the Gaussian kernel function G_{\sigma} (or, equivalently, folded into the convolution kernel; it is just a scalar multiplier anyway) and the result is scaled by \sigma^{\gamma}. Here L is the input image and LoG is the Laplacian-of-Gaussian image.
When the order of the differential is 2, \gamma is typically set to 2.
Then you should get quite similar magnitudes in both images.
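As a sketch of what that means for the kernel construction (written in C++ rather than the Java of the question, with gamma = 2 assumed; the helper name is mine):
#include <cmath>
#include <vector>

// Build a scale-normalized LoG kernel: the raw LoG response is multiplied
// by sigma^gamma so that filter magnitudes stay comparable across scales.
std::vector<double> scaleNormalizedLoG(int kernelSize, double sigma, double gamma = 2.0)
{
    std::vector<double> kernel(kernelSize * kernelSize);
    const int half = kernelSize / 2;
    for (int i = -half; i <= half; ++i) {
        for (int j = -half; j <= half; ++j) {
            double r2    = double(i * i + j * j);
            double value = -1.0 / (M_PI * std::pow(sigma, 4.0))
                           * (1.0 - r2 / (2.0 * sigma * sigma))
                           * std::exp(-r2 / (2.0 * sigma * sigma));
            kernel[(i + half) * kernelSize + (j + half)] = std::pow(sigma, gamma) * value;
        }
    }
    return kernel;
}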
Sources:
[1] Lindeberg: "Scale-space theory in computer vision" 1993
[2] Frangi et al. "Multiscale vessel enhancement filtering" 1998

Why is a Sprite Batcher faster?

I am reading Beginning Android Games (Mario Zechner) at the moment.
While reading about 2D games with OpenGL ES 1.0, the author introduces the concept of a SpriteBatcher that takes, for each sprite it shall render, the coordinates and an angle. The SpriteBatcher then calculates the final coordinates of the sprite rectangle and puts them into a single big buffer.
In the render method the SpriteBatcher sets the state for all the sprites once (texture, blending, vertex buffer, texture coordinates buffer). All sprites use the same texture but not the same texture coordinates.
The advantages of this behavior are:
The rendering pipeline does not stall, since there are no state changes while rendering all the sprites.
There are fewer OpenGL calls. (= less JNI overhead)
But I see a major disadvantage:
For rotation the CPU has to calculate the sine and cosine and perform 16 multiplications for each sprite. As far as I know, calculating sine and cosine is very expensive and slow.
But the SpriteBatcher approach is a lot faster than using lots of glRotate/glTranslate calls to render the sprites one by one.
Finally my questions:
Why is it faster? Are OpenGL state changes really that expensive?
The GPU is optimized for vector multiplications and rotations, while the CPU is not. Why doesn't that matter?
Would one use a SpriteBatcher on a desktop with a dedicated GFX-card?
Is there a point where the SpriteBatcher becomes inefficient?
But I see a major disadvantage:
For rotation the CPU has to calculate the sine and cosine and perform 16 multiplications for each sprite. As far as I know, calculating sine and cosine is very expensive and slow.
Actually sin and cos are quite fast on modern architectures, and they are not the bottleneck here. However, if each sprite is rotated individually and an ordinary frustum perspective projection is used, the author of this code doesn't know his linear algebra.
The whole task can be simplified a lot if one recalls that the modelview matrix maps local/world coordinates to eye space. The rotation is in the upper-left 3×3 submatrix, its columns forming the local base vectors. By taking the inverse of this submatrix you get exactly the vectors you need as the sprite base, to map the planar sprite into eye space. If only rotations (and maybe uniform scaling) are applied, the inverse of the upper-left 3×3 is its transpose; so by using the rows of the upper-left 3×3 as the sprite base you get that effect without doing any trigonometry at all:
/* populates the currently bound VBO with sprite geometry;
   vec3 is assumed to carry x, y, z plus a per-sprite scale member */
void populate_sprites_VBO(std::vector<vec3> const &sprite_positions)
{
    GLfloat mv[16];
    GLfloat sprite_left[3];
    GLfloat sprite_up[3];

    glGetFloatv(GL_MODELVIEW_MATRIX, mv);

    /* the rows of the upper-left 3x3 of the (column-major) modelview
       matrix are the sprite base vectors in local/world space */
    for(int i = 0; i < 3; i++) {
        sprite_left[i] = mv[i*4];
        sprite_up[i]   = mv[i*4 + 1];
    }

    std::vector<GLfloat> sprite_geom;
    for(std::vector<vec3>::const_iterator sprite = sprite_positions.begin(), end = sprite_positions.end();
        sprite != end;
        ++sprite) {
        sprite_geom.push_back(sprite->x + (-sprite_left[0] - sprite_up[0])*sprite->scale);
        sprite_geom.push_back(sprite->y + (-sprite_left[1] - sprite_up[1])*sprite->scale);
        sprite_geom.push_back(sprite->z + (-sprite_left[2] - sprite_up[2])*sprite->scale);

        sprite_geom.push_back(sprite->x + ( sprite_left[0] - sprite_up[0])*sprite->scale);
        sprite_geom.push_back(sprite->y + ( sprite_left[1] - sprite_up[1])*sprite->scale);
        sprite_geom.push_back(sprite->z + ( sprite_left[2] - sprite_up[2])*sprite->scale);

        sprite_geom.push_back(sprite->x + ( sprite_left[0] + sprite_up[0])*sprite->scale);
        sprite_geom.push_back(sprite->y + ( sprite_left[1] + sprite_up[1])*sprite->scale);
        sprite_geom.push_back(sprite->z + ( sprite_left[2] + sprite_up[2])*sprite->scale);

        sprite_geom.push_back(sprite->x + (-sprite_left[0] + sprite_up[0])*sprite->scale);
        sprite_geom.push_back(sprite->y + (-sprite_left[1] + sprite_up[1])*sprite->scale);
        sprite_geom.push_back(sprite->z + (-sprite_left[2] + sprite_up[2])*sprite->scale);
    }

    glBufferData(GL_ARRAY_BUFFER,
                 sprite_geom.size() * sizeof(sprite_geom[0]), sprite_geom.data(),
                 GL_STREAM_DRAW);
}
If shaders are available, then instead of rebuilding the sprite data on CPU each frame, one could use the geometry shader or the vertex shader. A geometry shader would take a vector of position, scale, texture, etc. and emit the quads. Using a vertex shader you'd send a lot of [-1,1] quads, where each vertex would carry the center position of the sprite it belongs to as an additional vec3 attribute.
Finally my questions:
Why is it faster? Are OpenGL state changes really that expensive?
Some state changes are extremely expensive, you'll try to avoid those, wherever possible. Switching textures is very expensive, switching shaders is mildly expensive.
The GPU is optimized for vector multiplications and rotations, while the CPU is not. Why doesn't that matter?
This is not the difference between GPU and CPU. Where a GPU differs from a CPU is that it performs the same sequence of operations on a huge chunk of records in parallel (each pixel of the framebuffer being rendered to), whereas a CPU runs the program one record at a time.
But CPUs do vector operations just as well as, if not better than, GPUs. Especially where precision matters, CPUs are still preferred over GPUs. MMX, SSE and 3DNow! are vector math instruction sets.
Would one use a SpriteBatcher on a desktop with a dedicated GFX-card?
Probably not in this form, since today one has geometry and vertex shaders available, liberating the CPU for other things. But more importantly this saves bandwidth between CPU and GPU. Bandwidth is the tighter bottleneck, processing power is not the number one problem these days (of course one never has enough processing power).
Is there a point where the SpriteBatcher becomes inefficient?
Yes, namely the CPU → GPU transfer bottleneck. Today one uses geometry shaders and instancing to do this kind of thing, really fast.
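For reference, a minimal sketch of the instanced-draw setup (desktop GL 3.3-style calls loaded via GLEW or a similar loader; the buffer layout and attribute locations are illustrative, not from the book):
#include <GL/glew.h>   // or any loader providing the GL 3.3 entry points

// Quad corners live in one small VBO (per vertex); sprite center + scale
// live in a second VBO advanced once per instance; one call draws them all.
void drawSpritesInstanced(GLuint quadVBO, GLuint spriteVBO, GLsizei spriteCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, quadVBO);
    glEnableVertexAttribArray(0);                         // attribute 0: [-1,1] corner
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);

    glBindBuffer(GL_ARRAY_BUFFER, spriteVBO);
    glEnableVertexAttribArray(1);                         // attribute 1: center.xyz + scale
    glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, nullptr);
    glVertexAttribDivisor(1, 1);                          // advance once per instance

    // the vertex shader expands each corner to center + corner * scale
    glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, spriteCount);
}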
I don't know about SpriteBatcher, but looking at the information you provided here are my thoughts:
It is faster because it uses fewer state changes and, more importantly, fewer draw calls. Mobile platforms have especially strict limits on the number of draw calls per frame.
That doesn't matter because they are probably using the CPU for the rotations. Personally, I see no reason not to use the GPU for that, which would be way faster and eliminate the bandwidth load.
I guess it would still be a good optimization considering point 1.
I can think of two extreme cases: when there are too few sprites, or when the compound texture (containing all the rotated sprites) grows too big (mobile devices have lower texture size limits).
