Are there noise functions in GLSL OpenGL ES 2.0 (iOS)? - opengl-es

Or any counterpart?
How can I generate a cheap random number?

GLSL ES doesn't come with noise functions, and the desktop GLSL noise functions are almost never implemented.
However, there are some freeware noise functions available. They're supposed to be pretty decent and fast. I've never used them myself, but they should work. It's MIT-licensed code, if you're worried about that.

Define "cheap".
The way random numbers work in computers is that they're not really random. You start with a number (the seed), and for each random number you want, you do some fancy-looking calculation on that number to get another number that looks random; you use that number as your random number and as the seed for the next one. See here for the gory details.
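For illustration, a minimal sketch of such a generator on the CPU side (a linear congruential generator; the constants are the common "Numerical Recipes" ones, nothing specific to the answer above):

#include <cstdint>

// Minimal linear congruential generator: each call scrambles the previous
// state to produce the next "random" number, which also becomes the seed
// for the call after it.
struct Lcg {
    std::uint32_t state;   // the seed
    std::uint32_t next() {
        state = 1664525u * state + 1013904223u;   // "Numerical Recipes" constants
        return state;
    }
};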
Problem is, that procedure is inherently sequential, which is no good for shaders.
You could theoretically write a function in a fragment shader that makes some hash out of, say, the fragment position and potentially some uniform int that is incremented every frame, but that is an awful lot of work for a fragment shader, just to produce something that looks like noise.
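If you do want to try that anyway, a commonly seen sine-based hash looks roughly like the following. This is only a sketch, not a proper noise function: the shader source is given as a C++ string so it can be passed to glShaderSource(), and the magic constants are arbitrary folklore values.

// Hypothetical GLSL ES 2.0 fragment shader: cheap hash of the fragment position
// plus a per-frame time uniform. Looks noisy, but is not statistically random.
static const char* kHashNoiseFragmentShader = R"(
precision mediump float;
uniform float uTime;   // incremented by the application every frame

float hash(vec2 p) {
    return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
}

void main() {
    float n = hash(gl_FragCoord.xy + vec2(uTime));
    gl_FragColor = vec4(vec3(n), 1.0);
}
)";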
The conventional technique for producing noise effects in OpenGL is to create a noisy texture and have the shader(s) use it in various ways. You could simply apply the texture as a standard texture to your surface, or you could stretch it or clamp its color values. For time-varying effects you might want to use a 3D texture, or have a larger 2D texture and pass a random texture coordinate offset to the fragment shader stage each frame via a uniform.
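A minimal sketch of that texture-based approach, again with the GLSL ES source as a C++ string; uNoise is assumed to be a pre-generated noise texture with GL_REPEAT wrapping, and uOffset a random offset the application updates once per frame:

static const char* kTextureNoiseFragmentShader = R"(
precision mediump float;
uniform sampler2D uNoise;   // pre-generated noise texture, GL_REPEAT wrapping
uniform vec2 uOffset;       // random texture-coordinate offset, set each frame
varying vec2 vTexCoord;

void main() {
    float n = texture2D(uNoise, vTexCoord + uOffset).r;
    gl_FragColor = vec4(vec3(n), 1.0);
}
)";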
Also have a look at Perlin noise, which essentially uses a variation of the effect described above.

Related

Precalculating OpenGL model transformations for static world space

I'm working on an OpenGL visualisation for navigating a 3D dataset. Briefly, the visualisation takes in a large (~1 million data points) array of matrices, which are then eigendecomposed and visualised as ellipsoids.
I have found that performance improves significantly when I calculate the ellipsoid vertex transformations "up front" (i.e. calculate all model transformations once, only on the CPU) rather than in shaders (where the model transformations have to be calculated on each draw). For scene navigation, lighting, etc., the view and projection transformations are calculated as normal and passed to the relevant shaders as uniforms.
The result of this approach is the program taking longer to initialise (due to the CPU being tied up calculating all the model transformations), but significantly higher frame rates.
I understand from this that it is common to decompose matrices to avoid unnecessary shader computations; however, I haven't come across anything describing this practice of completely pre-calculating the world space.
I understand that this approach is only appropriate for my narrow use case (i.e. a static scene, so there will never be a situation where a vertex's world-space position changes while the program is running). Apart from that, are there any significant reasons why I should avoid doing this?
It's a common optimization to remove redundant transformations from static objects. Your objects are static in the world, so you've collapsed all the redundant transformations right up to the root of your scene, which is not a problem.
Having said that, the performance gain you're seeing probably doesn't come from the cost of doing the model transform in the shader, but from passing that transform to the shader for each object. You haven't said much about how you organize the ellipsoids, but if you are updating a program's model-matrix uniform and issuing a DrawElements call for each ellipsoid, that is very slow indeed. Even doing something more exotic -- like using instancing and passing each transform in a VBO -- you would still have the overhead of updating them, which you can now avoid. If you are not doing this already, you can group your ellipsoid vertices into large arrays and draw them with only a few DrawElements calls.
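As a rough sketch of that batching idea (the Vertex layout and names are illustrative, the vertices are assumed to already be in world space, and the vertex attribute setup and shader binding are omitted):

#include <vector>
#include <GL/glew.h>   // or whichever GL loader the project already uses

struct Vertex { float position[3]; float normal[3]; };

// Upload all ellipsoids into one vertex/index buffer pair (done once at startup)...
void uploadEllipsoids(GLuint vbo, GLuint ibo,
                      const std::vector<Vertex>& vertices,
                      const std::vector<GLuint>& indices)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex),
                 vertices.data(), GL_STATIC_DRAW);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(GLuint),
                 indices.data(), GL_STATIC_DRAW);
}

// ...then draw the whole batch with a single uniform update and a single call.
void drawEllipsoids(GLint viewProjLocation, const float* viewProj, GLsizei indexCount)
{
    glUniformMatrix4fv(viewProjLocation, 1, GL_FALSE, viewProj);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
}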

Efficiently rendering a transparent terrain in OpenGL

I'm writing an OpenGL program that visualizes caves, so when I visualize the surface terrain I'd like to make it transparent, so you can see the caves below. I'm assuming I can normalize the data from a Digital Elevation Model into a grid aligned to the X/Z axes with regular spacing, and render each grid cell as two triangles. With an aligned grid I could avoid the cost of sorting when applying the painter's algorithm (to ensure proper transparency effects); instead I could just render the cells row by row, starting with the farthest row and the farthest cell of each row.
That's all well and good, but my question for OpenGL experts is, how could I draw the terrain most efficiently (and in a way that could scale to high resolution terrains) using OpenGL? There must be a better way than calling glDrawElements() once for every grid cell. Here are some ways I'm thinking about doing it (they involve features I haven't tried yet, that's why I'm asking the experts):
glMultiDrawElements Idea
Put all the terrain coordinates in a vertex buffer
Put all the coordinate indices in an element buffer
To draw, write the starting indices of each cell into an array in the desired order and call glMultiDrawElements with that array.
This seems pretty good, but I was wondering if there was any way I could avoid transferring an array of indices to the graphics card every frame, so I came up with the following idea:
Uniform Buffer Idea
This seems like a backward way of using OpenGL, but just putting it out there...
Put the terrain coordinates in a 2D array in a uniform buffer
Put coordinate index offsets 0..5 in a vertex buffer (they would have to be floats, I know)
call glDrawArraysInstanced - each instance will be one grid cell
the vertex shader examines the position of the camera relative to the terrain to determine how to order the cells, maps gl_InstanceID to the index of the cell's first coordinate in the uniform buffer, and sets gl_Position to the coordinate at that index plus the index-offset attribute
I figure there might be shiny new OpenGL 4.0 features I'm not aware of that would be more elegant than either of these approaches. I'd appreciate any tips!
The glMultiDrawElements() approach sounds very reasonable. I would implement that first, and use it as a baseline you can compare to if you try more complex approaches.
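A minimal sketch of that baseline (names are illustrative; each cell is assumed to contribute 6 indices -- two triangles -- to one shared element buffer that is already bound, along with the vertex buffer and shader program):

#include <vector>
#include <GL/glew.h>

// sortedFirstIndices holds, per grid cell, the index of its first element-buffer
// entry, filled in back-to-front order each frame.
void drawTerrainSorted(const std::vector<GLuint>& sortedFirstIndices)
{
    std::vector<GLsizei> counts(sortedFirstIndices.size(), 6);   // two triangles per cell
    std::vector<const GLvoid*> offsets(sortedFirstIndices.size());
    for (size_t i = 0; i < sortedFirstIndices.size(); ++i)
        offsets[i] = reinterpret_cast<const GLvoid*>(sortedFirstIndices[i] * sizeof(GLuint));

    glMultiDrawElements(GL_TRIANGLES, counts.data(), GL_UNSIGNED_INT,
                        offsets.data(), static_cast<GLsizei>(counts.size()));
}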
Whether you have a chance to make it faster depends on whether the processing of draw calls is a significant bottleneck in your rendering. Unless the triangles you render are very small, and/or your fragment shader very simple, there's a good chance that you will be limited by fragment processing anyway. If you have profiling tools that let you collect data and identify bottlenecks, you can be much more targeted in your optimization efforts. Of course there is always the low-tech approach: if making the window smaller improves your performance, chances are that you're mostly fragment limited.
Back to your question: since you asked about shiny new GL4 features, another method you could check out is indirect rendering, using glDrawElementsIndirect(). Beyond being more flexible, the main difference from glMultiDrawElements() is that the parameters used for each draw, like the start index in your case, can be sourced from a buffer. This might save one copy if you map that buffer and write the start indices into it directly. You could even combine it with persistent buffer mapping (look up GL_MAP_PERSISTENT_BIT) so that you don't have to map and unmap the buffer each time.
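A hedged sketch of that indirect path, assuming GL 4.4 for glBufferStorage and persistent mapping (the command struct layout is the one defined by the spec; everything else, including the 6-indices-per-cell assumption, is illustrative):

#include <vector>
#include <GL/glew.h>

// Layout of one indirect draw command, as defined by OpenGL.
struct DrawElementsIndirectCommand {
    GLuint count;
    GLuint instanceCount;
    GLuint firstIndex;
    GLuint baseVertex;
    GLuint baseInstance;
};

// Create a persistently mapped indirect buffer with room for cellCount commands.
DrawElementsIndirectCommand* createIndirectBuffer(GLuint& buffer, GLsizei cellCount)
{
    const GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
    glGenBuffers(1, &buffer);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, buffer);
    glBufferStorage(GL_DRAW_INDIRECT_BUFFER,
                    cellCount * sizeof(DrawElementsIndirectCommand), nullptr, flags);
    return static_cast<DrawElementsIndirectCommand*>(
        glMapBufferRange(GL_DRAW_INDIRECT_BUFFER, 0,
                         cellCount * sizeof(DrawElementsIndirectCommand), flags));
}

// Each frame: write one command per cell (back to front) straight into the mapped
// buffer, then issue the draws. With GL 4.3+ the loop of draws could be collapsed
// into a single glMultiDrawElementsIndirect() call.
void drawCells(DrawElementsIndirectCommand* commands,
               const std::vector<GLuint>& sortedFirstIndices)   // 6 indices per cell
{
    for (size_t i = 0; i < sortedFirstIndices.size(); ++i) {
        commands[i] = { 6u, 1u, sortedFirstIndices[i], 0u, 0u };
        glDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                               reinterpret_cast<const void*>(i * sizeof(DrawElementsIndirectCommand)));
    }
}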
Your uniform buffer idea sounds pretty interesting. I'm slightly skeptical that it will perform better, but that's just a feeling, and not based on any data or direct experience. So I think you absolutely should try it, and report back on how well it works!
Stretching the scope of your question some more, you could also look into approaches for order-independent transparency rendering if you haven't considered and rejected them already. For example alpha-to-coverage is very easy to implement, and almost free if you would be using MSAA anyway. It doesn't produce very high quality transparency effects based on my limited attempts, but it could be very attractive if it does the job for your use case. Another technique for order-independent transparency is depth peeling.
If some self promotion is acceptable, I wrote an overview of some transparency rendering methods in an earlier answer here: OpenGL ES2 Alpha test problems.

Rendering realistic electric lightning using OpenGL

I'm implementing a simple lightning effect for my 3D game, something like this:
http://www.krazydad.com/bestiary/bestiary_lightning.html
I'm using OpenGL ES 2.0. I'm pondering what the best-looking and most performance-efficient way to render this in a 3D environment is, though, as the lines making up the electric bolt need to look "solid" when viewed from any angle.
I was thinking of generating two planes for each line segment, arranged in an X cross to create the effect of line thickness, rendering with depth-buffer writes disabled and some kind of additive blending mode, and texturing each line segment with an electric-looking texture that has an alpha channel.
I'm a bit worried about the performance hit from generating the necessary triangle lists with this method, though, as my game will potentially have a lot of lightning bolts generated at the same time. But as the length and thickness of the lightning bolts will vary a lot, I doubt it would look good to simply use an animated 3D model of a lightning bolt, stretched and pointed at the right location, which was my initial idea.
I was thinking of an alternative approach where I render the lightning bolts using 2D lines between projected end points in a post processing pass. That should work well since the perspective effect in my case is negligible, except then it would be tricky to have the lines appear behind occluding objects.
Any good ideas on the best approach here?
Edit: I found this white paper from nVidia:
http://developer.download.nvidia.com/SDK/10/direct3d/Source/Lightning/doc/lightning_doc.pdf
It uses an approach with billboards for each line segment, then applies some filtering to smooth out the resulting gaps and overlaps between billboards.
It seems to yield pretty good visual results; however, I am not too happy about the additional filtering pass, as the game is for mobile phones where such a step is quite costly. And, as it turns out, billboarding is quite CPU-expensive too, due to the additional matrix calculation overhead, which is slow on mobile devices.
I ended up doing something like the nVidia paper suggested, but to avoid the need for a post-processing step I used different kinds of textures for different branching angles, to avoid gaps and overlaps at the segment corners, which turned out quite well. And to avoid the expensive billboard matrix calculations I instead drew the line segments using a more 2D approach, but calculated the depth value manually for each vertex in the segments. This yields both acceptable performance and visuals.
An animated texture, possibly powered by a shader, is likely the fastest way to handle this.
Any geometry generation and rendering will limit the quality of the effect, and may take significantly more CPU time, memory bandwidth and draw calls.
Using a single animated texture on a quad, or a shader creating procedural lightning, will give constant speed and make the effect much simpler to implement. For that, this question may be of interest.
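For what it's worth, a minimal sketch of the animated-texture idea (GLSL ES fragment shader given as a C++ string; uBolt is assumed to be an atlas with the animation frames laid out horizontally, and the application advances uFrame over time):

static const char* kLightningFragmentShader = R"(
precision mediump float;
uniform sampler2D uBolt;     // atlas of animation frames laid out horizontally
uniform float uFrame;        // current frame index, advanced by the application
uniform float uFrameCount;   // total number of frames in the atlas
varying vec2 vTexCoord;      // 0..1 across the quad

void main() {
    // Pick the sub-rectangle of the atlas for the current frame.
    vec2 uv = vec2((vTexCoord.x + floor(uFrame)) / uFrameCount, vTexCoord.y);
    vec4 bolt = texture2D(uBolt, uv);
    // Additive-style output: the alpha channel carries the bolt's intensity.
    gl_FragColor = vec4(bolt.rgb * bolt.a, bolt.a);
}
)";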

Fastest way to take coordinates from model space to canonical coordinate space in OpenGL ES 2.0

Like many 3D graphics programs, I have a bunch of objects that have their own model coordinates (from -1 to 1 on the x, y, and z axes). Then, I have a matrix that takes them from model coordinates to world coordinates (using the location, rotation, and scale of the object being drawn). Finally, I have a second matrix that turns those world coordinates into the canonical coordinates that OpenGL ES 2.0 will use to draw to the screen.
So, because one object can contain many vertices, all of which use the same transforms into world space and into canonical coordinates, it's faster to calculate the product of those two matrices once and put each vertex through the resulting matrix, rather than putting each vertex through both matrices.
But, as far as I can tell, there doesn't seem to be a way in OpenGL ES 2.0 shaders to have it calculate the matrix product once and keep using it until one of the two matrices is changed via glUniformMatrix4fv() (or another function that sets a uniform). So it seems like the only way to calculate the product once would be to do it on the CPU and send the result to the GPU as a uniform. Otherwise, when something like:
gl_Position = uProjection * uMV * aPosition;
is used, it will calculate the product over and over again, which seems like it would waste time.
So, which way is usually considered standard? Or is there a different way that I am completely missing? As far as I can tell, the shader used to implement the OpenGL ES 1.1 pipeline in the OpenGL ES 2.0 Programming Guide only used one matrix, so is that approach more common?
First, the correct OpenGL term for "canonical coordinates" is clip space.
Second, it should be this:
gl_Position = uProjection * (uMV * aPosition);
What you posted does a matrix/matrix multiply followed by a matrix/vector multiply. This version does 2 matrix/vector multiplies. That's a substantial difference.
You're using shader-based hardware; how you handle matrices is up to you. There is nothing that is "considered standard"; you do what you best need to do.
That being said, unless you are doing lighting in model space, you will often need some intermediary between model space and 4D homogeneous clip-space. This is the space you transform the positions and normals into in order to compute the light direction, dot(N, L), and so forth.
Personally, I wouldn't suggest world space for reasons that I explain thoroughly here. But whether it's world space, camera space, or something else, you will generally have some intermediate space that you need positions to be in. At which point, the above code becomes necessary, and thus there is no time wasted.
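For completeness, the CPU-side pre-multiplication the question asked about would look roughly like this (GLM is used here purely as an example matrix library; any 4x4 type will do), and it only pays off if you really don't need an intermediate space in the shader:

#include <GLES2/gl2.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Compute the model-view-projection product once per object on the CPU and
// upload it as a single uniform; the vertex shader then only needs
// gl_Position = uMVP * aPosition;
void uploadModelViewProjection(GLint mvpLocation,
                               const glm::mat4& projection,
                               const glm::mat4& modelView)
{
    const glm::mat4 mvp = projection * modelView;
    glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, glm::value_ptr(mvp));
}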

Where can I find a good read about bicubic interpolation and Lanczos resampling?

I want to implement the two above mentioned image resampling algorithms (bicubic and Lanczos) in C++. I know that there are dozens of existing implementations out there, but I still want to make my own. I want to make it partly because I want to understand how they work, and partly because I want to give them some capabilities not found in mainstream implementations (like configurable multi-CPU support and progress reporting).
I tried reading Wikipedia, but the stuff is a bit too dry for me. Perhaps there are some nicer explanations of these algorithms? I couldn't find anything either on SO or Google.
Added: Seems like nobody can give me a good link about these topics. Can anyone at least try to explain them here?
The basic operating principle of both algorithms is pretty simple. They're both convolution filters. For each output value, a convolution filter centers the convolution function's point of origin on that output location, multiplies all the input values by the value of the convolution function at their location, and adds the results together.
One property of convolution is that the integral of the output is the product of the integrals of the two input functions. If you consider the input and output images, the integral corresponds to average brightness, and if you want the brightness to remain the same, the integral of the convolution function needs to be one.
One way to understand them is to think of the convolution function as describing how much input pixels influence the output pixel depending on their distance.
Convolution functions are usually defined so that they are zero when the distance is larger than some value so that you don't have to consider every input value for every output value.
For Lanczos interpolation the convolution function is based on the sinc function, sinc(x) = sin(pi*x)/(pi*x), but only the first few lobes are kept. Usually 3:
lanczos(x) = {
0 if abs(x) > 3,
1 if x == 0,
else 3 * sin(pi*x) * sin(pi*x/3) / (pi*pi*x*x)    (i.e. sinc(x) * sinc(x/3))
}
This function is called the filter kernel.
To resample with Lanczos, imagine you overlay the output and input grids on each other, with points signifying where the pixel locations are. For each output pixel location you take a box of +-3 output pixels around that point. For every input pixel that lies in that box, calculate the value of the Lanczos function at that location, with the distance from the output location (in output pixel coordinates) as the parameter. You then need to normalize the calculated values by scaling them so that they add up to 1. After that, multiply each input pixel value by the corresponding scaling value and add the results together to get the value of the output pixel.
Because the Lanczos kernel is separable and, if you are resizing, the grid is regular, you can optimize this by doing the convolution horizontally and vertically separately, and precalculate the vertical filters for each output row and the horizontal filters for each output column.
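A sketch of one such separable pass over a single row of grayscale values, assuming the kernel is kept in input-pixel units (fine for upscaling; for strong downscaling the kernel would normally be stretched by the scale factor to avoid aliasing):

#include <algorithm>
#include <cmath>
#include <vector>

// Lanczos-3 kernel: sinc(x) * sinc(x/3), zero outside |x| < 3.
static double lanczos3(double x)
{
    if (x == 0.0) return 1.0;
    if (std::fabs(x) >= 3.0) return 0.0;
    const double pi = 3.14159265358979323846;
    return 3.0 * std::sin(pi * x) * std::sin(pi * x / 3.0) / (pi * pi * x * x);
}

// Resample one row to a new width, normalizing the weights so they add up to 1.
std::vector<double> resampleRow(const std::vector<double>& in, int outWidth)
{
    std::vector<double> out(outWidth, 0.0);
    const double scale = static_cast<double>(in.size()) / outWidth;
    for (int ox = 0; ox < outWidth; ++ox) {
        const double center = (ox + 0.5) * scale - 0.5;               // output pixel's position in input coordinates
        const int first = static_cast<int>(std::floor(center)) - 2;   // 6 taps cover the |x| < 3 support
        double sum = 0.0, weightSum = 0.0;
        for (int ix = first; ix < first + 6; ++ix) {
            const int clamped = std::min(std::max(ix, 0), static_cast<int>(in.size()) - 1);
            const double w = lanczos3(center - ix);
            sum += w * in[clamped];
            weightSum += w;
        }
        out[ox] = sum / weightSum;
    }
    return out;
}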
Bicubic convolution is basically the same, with a different filter kernel function.
To get more detail, there's a pretty good and thorough explanation in the book Digital Image Processing, section 16.3.
Also, image_operations.cc and convolver.cc in Skia have a pretty well commented implementation of Lanczos interpolation.
While what Ants Aasma says roughly describes the difference, I don't think it is particularly informative as to why you might do such a thing.
As far as links go, you are asking a very basic question in image processing, and any decent introductory textbook on the subject will describe this. If I remember correctly, Gonzalez and Woods is decent on it, but I'm away from my books and can't check.
Now on to the particulars. It should help to think about what you are doing fundamentally. You have a square lattice of measurements that you want to interpolate new values for. In the simple case of upsampling, let's imagine you want a new measurement in between every one that you already have (e.g. doubling the resolution).
Now you won't get the "correct" value, because in general you don't have that information. So you have to estimate it. How? A very simple way would be to linearly interpolate. Everyone knows how to do this with two points: you just draw a line between them and read the new value off the line (in this case, at the halfway point).
Now an image is two dimensional, so you really want to do this in both the left-right and up-down directions. Use the result for your estimate and voila you have "bilinear" interpolation.
The main problem with this is that it isn't very accurate, although it's better (and slower) than the "nearest neighbor" approach which is also very local and fast.
To address the first problem, you want something better than a linear fit of two points: you want to fit something to more data points (pixels), and something that can be nonlinear. A good trade-off between accuracy and computational cost is the cubic spline. This gives you a smooth fit line, and again you approximate your new "measurement" by the value it takes in the middle. Do this in both directions and you've got "bicubic" interpolation.
So that's more accurate, but still heavy. One way to address the speed issue is to use a convolution, which has the nice property that in the Fourier domain it's just a multiplication, so it can be implemented quite quickly. But you don't need to worry about the implementation to understand that the convolution result at any point is the integral of the product of one function (your image) with another function, typically one with much smaller support (the part that is non-zero), called the kernel, after that kernel has been centered over that particular point. In the discrete world, these are just sums of products.
It turns out that you can design a convolution kernel that has properties quite like the cubic spline, and use that to get a fast "bicubic" interpolation.
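For reference, one standard kernel of that kind is the Keys cubic convolution kernel with a = -0.5 (the Catmull-Rom case); it is a common choice for "bicubic" resampling, though not the only one:

#include <cmath>

// Keys cubic convolution kernel, support |x| < 2. Applying it along rows and
// then columns (with weights normalized to sum to 1) gives bicubic resampling.
static double cubicKernel(double x, double a = -0.5)
{
    x = std::fabs(x);
    if (x <= 1.0) return (a + 2.0) * x * x * x - (a + 3.0) * x * x + 1.0;
    if (x < 2.0)  return a * (x * x * x - 5.0 * x * x + 8.0 * x - 4.0);
    return 0.0;
}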
Lanczos resampling is a similar thing, with slightly different properties in the kernel, which primarily means it will have different characteristic artifacts. You can look up the details of these kernel functions easily enough (I'm sure Wikipedia has them, or any intro text). The implementations used in graphics programs tend to be highly optimized and sometimes have specialized assumptions which make them more efficient but less general.
I would like to suggest the following article for a basic understanding of different image interpolation methods: image interpolation via convolution. If you want to try more interpolation methods, the imageresampler is a nice open-source project to begin with.
In my opinion image interpolation can be understood from two perspectives: function fitting and convolution. For example, the spline interpolation explained in image interpolation via convolution is well explained from the function-fitting perspective in Cubic interpolation.
Additionally, image interpolation is always tied to a specific application, for example image zooming, image rotation and so on. In fact, for a specific application, image interpolation can be implemented in a smart way. For example, image rotation can be implemented via a three-shear method, and during each shearing operation a different one-dimensional interpolation algorithm can be used.
