How do the permutation and gradient tables of Perlin and Simplex Noise work in practice?

So I have been doing a bit of research into how Perlin and Simplex noise work, and while I get the core principles of regular Perlin noise, I'm a little bit confused about how the permutation and gradient tables work.
From my understanding, they provide better performance than a seeded random number generator as they are tables of pre-computed values that are nicely indexed for quick access.
What I don't entirely get, though, is how they work in practice. I've seen a permutation table implemented as an array of the shuffled values from 0-255, like so:
permutation[] = { 151,160,137,91,90,15,
131,13,201,95,96,53,194,233,7,225,140,36,103,30,69,142,8,99,37,240,21,10,23,
190, 6,148,247,120,234,75,0,26,197,62,94,252,219,203,117,35,11,32,57,177,33,
88,237,149,56,87,174,20,125,136,171,168, 68,175,74,165,71,134,139,48,27,166,
77,146,158,231,83,111,229,122,60,211,133,230,220,105,92,41,55,46,245,40,244,
102,143,54, 65,25,63,161, 1,216,80,73,209,76,132,187,208, 89,18,169,200,196,
135,130,116,188,159,86,164,100,109,198,173,186, 3,64,52,217,226,250,124,123,
5,202,38,147,118,126,255,82,85,212,207,206,59,227,47,16,58,17,182,189,28,42,
223,183,170,213,119,248,152, 2,44,154,163, 70,221,153,101,155,167, 43,172,9,
129,22,39,253, 19,98,108,110,79,113,224,232,178,185, 112,104,218,246,97,228,
251,34,242,193,238,210,144,12,191,179,162,241, 81,51,145,235,249,14,239,107,
49,192,214, 31,181,199,106,157,184, 84,204,176,115,121,50,45,127, 4,150,254,
138,236,205,93,222,114,67,29,24,72,243,141,128,195,78,66,215,61,156,180
};
But I'm unsure what the practical purpose of this is. What I want to know is:
How is the permutation table used in relation to the grid points?
How is the gradient table generated?
How are the values from the permutation table used with the gradient table? Do the permutation values correspond to indices from the gradient table?

I've been taking the libnoise and Perlin noise code apart off and on for a while now so that I could understand how it all worked. I hate working with code I don't understand :)
Walking through http://catlikecoding.com/unity/tutorials/noise/ may help you. Even if you don't use Unity, you should be able to convert the code accordingly. It helped me a lot.
There are various other sites out there with hints and tips. Googling libnoise, procedural noise, etc. should show you some examples you can look through.
Basically, the gradients used in the noise, in conjunction with the integer array, are the points around 0,0,0 (directions such as (1,1,0)), with a few extra entries to pad the table out to a set size. An integer is picked from the hash array based on the x,y,z coordinates of the surrounding grid points (0 and 1 indicating either side of the point), for example such that you have:
// Separate the integer element (for negative coordinates a floor would be needed here)
int ix0 = (int)point.x;
int iy0 = (int)point.y;
int iz0 = (int)point.z;
// Grab the fractional parts for use later
float tx0 = point.x - ix0;
float ty0 = point.y - iy0;
float tz0 = point.z - iz0;
float tx1 = tx0 - 1f;
float ty1 = ty0 - 1f;
float tz1 = tz0 - 1f;
// Make sure that it is a value compatible with the integer array
// (hashMask is 255, i.e. the table length minus one)
ix0 &= hashMask;
iy0 &= hashMask;
iz0 &= hashMask;
// Get the other side of the point
int ix1 = ix0 + 1;
int iy1 = iy0 + 1;
int iz1 = iz0 + 1;
// Grab the integers found at the location in the array
int h0 = hash[ix0];
int h1 = hash[ix1];
int h00 = hash[h0 + iy0];
int h10 = hash[h1 + iy0];
int h01 = hash[h0 + iy1];
int h11 = hash[h1 + iy1];
// Gradient array
private static Vector3[] gradients3D = {
new Vector3( 1f, 1f, 0f),
new Vector3(-1f, 1f, 0f),
new Vector3( 1f,-1f, 0f),
new Vector3(-1f,-1f, 0f),
new Vector3( 1f, 0f, 1f),
new Vector3(-1f, 0f, 1f),
new Vector3( 1f, 0f,-1f),
new Vector3(-1f, 0f,-1f),
new Vector3( 0f, 1f, 1f),
new Vector3( 0f,-1f, 1f),
new Vector3( 0f, 1f,-1f),
new Vector3( 0f,-1f,-1f),
new Vector3( 1f, 1f, 0f),
new Vector3(-1f, 1f, 0f),
new Vector3( 0f,-1f, 1f),
new Vector3( 0f,-1f,-1f)
};
private const int gradientsMask3D = 15;
// Grab the gradient value at the requested point
Vector3 g000 = gradients3D[hash[h00 + iz0] & gradientsMask3D];
Vector3 g100 = gradients3D[hash[h10 + iz0] & gradientsMask3D];
Vector3 g010 = gradients3D[hash[h01 + iz0] & gradientsMask3D];
Vector3 g110 = gradients3D[hash[h11 + iz0] & gradientsMask3D];
Vector3 g001 = gradients3D[hash[h00 + iz1] & gradientsMask3D];
Vector3 g101 = gradients3D[hash[h10 + iz1] & gradientsMask3D];
Vector3 g011 = gradients3D[hash[h01 + iz1] & gradientsMask3D];
Vector3 g111 = gradients3D[hash[h11 + iz1] & gradientsMask3D];
// Calculate the dot product using the vector and respective fractions
float v000 = Dot(g000, tx0, ty0, tz0);
float v100 = Dot(g100, tx1, ty0, tz0);
float v010 = Dot(g010, tx0, ty1, tz0);
float v110 = Dot(g110, tx1, ty1, tz0);
float v001 = Dot(g001, tx0, ty0, tz1);
float v101 = Dot(g101, tx1, ty0, tz1);
float v011 = Dot(g011, tx0, ty1, tz1);
float v111 = Dot(g111, tx1, ty1, tz1);
// Interpolate between 2 dot results using the fractional numbers
l0 = Lerp(v000, v100, tx);
l1 = Lerp(v010, v110, tx);
l2 = Lerp(l0,l1,ty);
l3 = Lerp(v001, v101, tx);
l4 = Lerp(v011, v111, tx);
l5 = Lerp(l3,l4,ty);
l6 = Lerp(l2,l5,tz); // the final noise value for this point
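A note on that last step: tx, ty and tz are not declared in the snippet; they are (presumably) the fractional parts tx0, ty0, tz0 after being run through a smoothing (fade) curve, otherwise the noise would show creases at cell boundaries. A minimal C++ sketch of the standard quintic fade from improved Perlin noise (the function name is just illustrative):
// Quintic fade curve 6t^5 - 15t^4 + 10t^3 from improved Perlin noise.
// Applied to each fractional part before interpolating, e.g. tx = Fade(tx0).
static float Fade(float t)
{
    return t * t * t * (t * (t * 6.0f - 15.0f) + 10.0f);
}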
This results in a single number that is representative of a unique point in space, produced from the same integer and gradient arrays. Simply changing the seed and reshuffling the integer array will generate a different number, allowing you to bring uniqueness to an item while using the same code to generate it.
The reason the integer array is a repeated set of numbers totalling 512 elements is so that the lookups do not accidentally run past the 0-255 limit, which the +1 offsets added in the code above could otherwise cause.
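As a sketch of how such a table might be produced from a seed (not the original answer's code, just a C++ illustration): fill 0-255, shuffle with a seeded engine, then duplicate the 256 values so indices like hash[h0 + iy0] + 1 can never run off the end:
#include <algorithm>
#include <numeric>
#include <random>

int hash[512];
void BuildHashTable(unsigned seed)
{
    std::iota(hash, hash + 256, 0);                     // fill with 0, 1, ..., 255
    std::shuffle(hash, hash + 256, std::mt19937(seed)); // seeded reshuffle
    std::copy(hash, hash + 256, hash + 256);            // repeat into the upper 256 entries
}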
If you visualise a line (1D: x0 - x1), a square (2D: x0,y0 - x1,y1) and a cube (3D: x0,y0,z0 - x1,y1,z1), you will hopefully see what the code is doing, and that for the most part the code is very similar in each dimension.
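To make the 1D case concrete, here is a minimal C++ sketch of the same hash → gradient → dot → lerp pattern (Perlin1D is just an illustrative name; perm is assumed to be the 512-entry doubled table described above):
#include <cmath>

float Perlin1D(float x, const int perm[512])
{
    int   ix0 = (int)std::floor(x) & 255;   // left lattice point, wrapped into the table
    float tx0 = x - std::floor(x);          // fraction measured from the left point
    float tx1 = tx0 - 1.0f;                 // fraction measured from the right point
    // in 1D the "gradients" are just slopes of +1 or -1, chosen by the hashed value
    float g0 = (perm[ix0]     & 1) ? -1.0f : 1.0f;
    float g1 = (perm[ix0 + 1] & 1) ? -1.0f : 1.0f;
    float v0 = g0 * tx0;                    // "dot product" at the left point
    float v1 = g1 * tx1;                    // "dot product" at the right point
    float t  = tx0 * tx0 * tx0 * (tx0 * (tx0 * 6.0f - 15.0f) + 10.0f); // quintic fade
    return v0 + t * (v1 - v0);              // lerp with the smoothed fraction
}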
I tried making my own version of the code, and after several attempts I can now understand why everyone's noise code is so similar: there is really only the one way Perlin (and, similarly, simplex) noise will work.
So my goal now is to work this functionality into shader equivalent code to help me, at least, to understand the ins and outs of both perlin noise and shader programming. It's a learning curve but it's fun at the same time.
Well, hopefully this has answered all your questions. If you want to know the whys and wherefores of Ken Perlin's improved noise code, check out the following:
http://http.developer.nvidia.com/GPUGems/gpugems_ch05.html - visual of cube

Related

Creating gyroid pattern in 2D image algorithm

I'm trying to fill an image with gyroid lines of a certain thickness at a certain spacing, but math is not my area. I was able to create a sine wave and shift it a bit in the X direction to make it look like a gyroid, but it's not the same.
The idea is to stack several images of the same resolution and replicate the gyroid as 2D slices, so we still have XYZ, where Z can be 0.01 mm to 0.1 mm per layer.
What I've tried:
int sineHeight = 100;
int sineWidth = 100;
int spacing = 100;
int radius = 10;
for (int y1 = 0; y1 < mat.Height; y1 += sineHeight+spacing)
for (int x = 0; x < mat.Width; x++)
{
// Simulating first image
int y2 = (int)(Math.Sin((double)x / sineWidth) * sineHeight / 2.0 + sineHeight / 2.0 + radius);
Circle(mat, new System.Drawing.Point(x, y1+y2), radius, EmguExtensions.WhiteColor, -1, LineType.AntiAlias);
// Simulating second image, shift by x to make it look a bit more with gyroid
y2 = (int)(Math.Sin((double)x / sineWidth + sineWidth) * sineHeight / 2.0 + sineHeight / 2.0 + radius);
Circle(mat, new System.Drawing.Point(x, y1 + y2), radius, EmguExtensions.GreyColor, -1, LineType.AntiAlias);
}
Resulting in: (white represents layer 1 while grey is layer 2)
Still, this looks nothing like a real gyroid. How can I adapt the formula to work in this space?
You get just a single (and not very useful) slice because I do not see any z in your code (which is consistent: the surface has horizontal and vertical sine waves like this every 0.5*pi in z).
To see the 3D surface you have to raycast along z...
I would expect a conditional test of the gyroid equation, evaluated at the currently iterated x,y,z, against some small non-zero threshold, e.g. if (result <= 1e-6), and only then draw the pixel, or compute the colour from the result instead. This is ideal to do in GLSL.
In case you are not familiar with GLSL and shaders: the fragment shader is executed for each pixel (called a fragment) of the rendered quad, so you just put the code inside your nested x,y for loops and use your x,y instead of pos (you can ignore the vertex shader, it's not important here).
You have 2 basic options for rendering this:
Blending the ray-cast surface pixels together, creating an X-ray-like image. It can be combined with SSS techniques to get the impression of glass or a semi-transparent material. Here is a simple GLSL example of the blending:
Vertex:
#version 400 core
in vec2 position;
out vec2 pos;
void main(void)
{
pos=position;
gl_Position = vec4(position.xy,0.0,1.0);
}
Fragment:
#version 400 core
in vec2 pos;
out vec3 out_col;
void main(void)
{
float n,x,y,z,dz,d,i,di;
const float scale=2.0*3.1415926535897932384626433832795;
n=100.0; // layers
x=pos.x*scale; // x position of pixel
y=pos.y*scale; // y position of pixel
dz=2.0*scale/n; // z step
di=1.0/n; // color increment
i=0.0; // color intensity
for (z=-scale;z<=scale;z+=dz) // do all layers
{
d =sin(x)*cos(y); // compute gyroid equation
d+=sin(y)*cos(z);
d+=sin(z)*cos(x);
if (d<=1e-6) i+=di; // if near surface add to color
}
out_col=vec3(1.0,1.0,1.0)*i;
}
Usage is simple: just render a 2D quad covering the screen, without any matrices, with corner pos points in the range <-1,+1>. Here is the result:
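For completeness (not part of the original answer), a minimal C++ sketch of that fullscreen-quad setup; it assumes the position attribute of the vertex shader above ends up at location 0, otherwise query it with glGetAttribLocation:
// two triangles covering normalized device coordinates <-1,+1>
const float quad[] = { -1.f,-1.f,   1.f,-1.f,   1.f, 1.f,
                       -1.f,-1.f,   1.f, 1.f,  -1.f, 1.f };
GLuint vao, vbo;
glGenVertexArrays(1, &vao); glBindVertexArray(vao);
glGenBuffers(1, &vbo);      glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (const void*)0);
// each frame, with the shader program bound:
glDrawArrays(GL_TRIANGLES, 0, 6);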
Another technique is to render the first hit with the surface, creating a solid, mesh-like image. In order to see the details we need to add basic (double-sided) directional lighting, for which the surface normal is needed. The normal can be computed by simply taking the partial derivatives of the equation with respect to x, y, z. As the surface is now opaque, we can stop at the first hit and also ray cast just a single period in z, since anything after that is hidden anyway. Here is a simple example:
Fragment:
#version 400 core
in vec2 pos; // input fragment (pixel) position <-1,+1>
out vec3 col; // output fragment (pixel) RGB color <0,1>
void main(void)
{
bool _discard=true;
float N,x,y,z,dz,d,i;
vec3 n,l;
const float pi=3.1415926535897932384626433832795;
const float scale =3.0*pi; // 3.0 periods in x,y
const float scalez=2.0*pi; // 1.0 period in z
N=200.0; // layers per z (quality)
x=pos.x*scale; // <-1,+1> -> [rad]
y=pos.y*scale; // <-1,+1> -> [rad]
dz=2.0*scalez/N; // z step
l=vec3(0.0,0.0,1.0); // light unit direction
i=0.0; // starting color intensity
n=vec3(0.0,0.0,1.0); // starting normal, only to get rid of a warning
for (z=0.0;z>=-scalez;z-=dz) // raycast z through all layers in view direction
{
// gyroid equation
d =sin(x)*cos(y); // compute gyroid equation
d+=sin(y)*cos(z);
d+=sin(z)*cos(x);
// surface hit test
if (d>1e-6) continue; // skip if too far from surface
_discard=false; // remember that surface was hit
// compute normal from the gradient of the gyroid field
n.x=cos(x)*cos(y)-sin(x)*sin(z); // partial derivative by x
n.y=cos(y)*cos(z)-sin(x)*sin(y); // partial derivative by y
n.z=cos(z)*cos(x)-sin(y)*sin(z); // partial derivative by z
break; // stop raycasting
}
// skip rendering if no hit with surface (hole)
if (_discard) discard;
// directional lighting
n=normalize(n);
i=abs(dot(l,n));
// ambient + directional lighting
i=0.3+(0.7*i);
// output fragment (render pixel)
gl_FragDepth=z; // depth (optional)
col=vec3(1.0,1.0,1.0)*i; // color
}
I hope I did not make an error in the partial derivatives. Here is the result:
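For reference, the normal direction comes from the gradient of the gyroid field d = sin x cos y + sin y cos z + sin z cos x; differentiating term by term gives:
$$
\frac{\partial d}{\partial x} = \cos x\cos y - \sin x\sin z,\qquad
\frac{\partial d}{\partial y} = \cos y\cos z - \sin x\sin y,\qquad
\frac{\partial d}{\partial z} = \cos z\cos x - \sin y\sin z
$$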
[Edit1]
Based on your code I see it like this (X-ray-like blending):
var mat = EmguExtensions.InitMat(new System.Drawing.Size(2000, 1080));
double zz, dz, d, i, di = 0;
const double scalex = 2.0 * Math.PI / mat.Width;
const double scaley = 2.0 * Math.PI / mat.Height;
const double scalez = 2.0 * Math.PI;
uint layerCount = 100; // layers
for (int y = 0; y < mat.Height; y++)
{
double yy = y * scaley; // y position of pixel
for (int x = 0; x < mat.Width; x++)
{
double xx = x * scalex; // x position of pixel
dz = 2.0 * scalez / layerCount; // z step
di = 1.0 / layerCount; // color increment
i = 0.0; // color intensity
for (zz = -scalez; zz <= scalez; zz += dz) // do all layers
{
d = Math.Sin(xx) * Math.Cos(yy); // compute gyroid equation
d += Math.Sin(yy) * Math.Cos(zz);
d += Math.Sin(zz) * Math.Cos(xx);
if (d > 1e-6) continue;
i += di; // if near surface add to color
}
i*=255.0;
mat.SetByte(x, y, (byte)(i));
}
}

Confusion about zFar and zNear plane offsets using glm::perspective

I have been using glm to help build a software rasterizer for self education. In my camera class I am using glm::lookat() to create my view matrix and glm::perspective() to create my perspective matrix.
I seem to be getting what I expect for my left, right, top, and bottom clipping planes. However, I am either doing something wrong for my near/far planes or there is an error in my understanding. I have reached a point where my "google-fu" has failed me.
Operating under the assumption that I am correctly extracting clip planes from my glm::perspective matrix, and using the general plane equation:
aX+bY+cZ+d = 0
I am getting strange d or "offset" values for my zNear and zFar planes.
It is my understanding that the d value is the amount by which I would be shifting/translating the point P0 of the plane along the normal vector.
They are 0.200200200 and -0.200200200 respectively. However, my normals are correctly oriented at +1.0f and -1.0f along the z-axis, as expected for planes perpendicular to my z basis vector.
So when testing a point such as (0, 0, -5) in world space against these planes, it is transformed by my view matrix to:
(0, 0, 5.81181192)
so when testing it against these planes in a clip chain, this example vertex would be culled.
Here is the start of a camera class establishing the relevant matrices:
static constexpr glm::vec3 UPvec(0.f, 1.f, 0.f);
static constexpr auto zFar = 100.f;
static constexpr auto zNear = 0.1f;
Camera::Camera(glm::vec3 eye, glm::vec3 center, float fovY, float w, float h) :
viewMatrix{ glm::lookAt(eye, center, UPvec) },
perspectiveMatrix{ glm::perspective(glm::radians<float>(fovY), w/h, zNear, zFar) },
frustumLeftPlane {setPlane(0, 1)},
frustumRighPlane {setPlane(0, 0)},
frustumBottomPlane {setPlane(1, 1)},
frustumTopPlane {setPlane(1, 0)},
frstumNearPlane {setPlane(2, 0)},
frustumFarPlane {setPlane(2, 1)},
The frustum objects are based off the following struct:
struct Plane
{
glm::vec4 normal;
float offset;
};
I have extracted the 6 clipping planes from the perspective matrix as below:
Plane Camera::setPlane(const int& row, const bool& sign)
{
float temp[4]{};
Plane plane{};
if (sign == 0)
{
for (int i = 0; i < 4; ++i)
{
temp[i] = perspectiveMatrix[i][3] + perspectiveMatrix[i][row];
}
}
else
{
for (int i = 0; i < 4; ++i)
{
temp[i] = perspectiveMatrix[i][3] - perspectiveMatrix[i][row];
}
}
plane.normal.x = temp[0];
plane.normal.y = temp[1];
plane.normal.z = temp[2];
plane.normal.w = 0.f;
plane.offset = temp[3];
plane.normal = glm::normalize(plane.normal);
return plane;
}
Any help would be appreciated, as now I am at a loss.
Many thanks.
The d parameter of a plane equation describes how much the plane is offset from the origin along the plane normal. This also takes into account the length of the normal.
One can't just normalize the normal without also adjusting the d parameter since normalizing changes the length of the normal. If you want to normalize a plane equation then you also have to apply the division step to the d coordinate:
float normalLength = sqrt(temp[0] * temp[0] + temp[1] * temp[1] + temp[2] * temp[2]);
plane.normal.x = temp[0] / normalLength;
plane.normal.y = temp[1] / normalLength;
plane.normal.z = temp[2] / normalLength;
plane.normal.w = 0.f;
plane.offset = temp[3] / normalLength;
Side note 1: Usually, one would store the offset of a plane equation in the w-coordinate of a vec4 instead of a separate variable. The reason is that the typical operation you perform with it is a point to plane distance check like dist = n * x - d (for a given point x, normal n, offset d, * is dot product), which can then be written as dist = [n, d] * [x, -1].
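A short glm sketch of side note 1 (illustrative, not the asker's code): keep the plane as a single vec4 (n.x, n.y, n.z, d) with n unit length, and the distance check becomes one dot product:
#include <glm/glm.hpp>

// plane = (n.x, n.y, n.z, d) with the normal already normalized
float planeDistance(const glm::vec4& plane, const glm::vec3& x)
{
    return glm::dot(plane, glm::vec4(x, -1.0f)); // = dot(n, x) - d
}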
Side note 2: Most software and also hardware rasterizers perform clipping after the projection step, since it's cheaper and easier to implement.

Why are my specular highlights elliptical?

I think these should be circular. I assume there is something wrong with my normals but I haven't found anything wrong with them. Then again, finding a good test for the normals is difficult.
Here is the image:
Here is my shading code for each light, leaving out the recursive part for reflections:
lighting = ( hit.obj.ambient + hit.obj.emission );
const glm::vec3 view_direction = glm::normalize(eye - hit.pos);
const glm::vec3 reflection = glm::normalize(( static_cast<float>(2) * ( glm::dot(view_direction, hit.normal) * hit.normal ) ) - view_direction);
for(int i = 0; i < numused; ++i)
{
glm::vec3 hit_to_light = (lights[i].pos - hit.pos);
float dist = glm::length(hit_to_light);
glm::vec3 light_direction = glm::normalize(hit_to_light);
Ray lightray(hit.pos, light_direction);
Intersection blocked = Intersect(lightray, scene, verbose ? verbose : false);
if( blocked.dist >= dist)
{
glm::vec3 halfangle = glm::normalize(view_direction + light_direction);
float specular_multiplier = pow(std::max(glm::dot(halfangle,hit.normal), 0.f), shininess);
glm::vec3 attenuation_term = lights[i].rgb * (1.0f / (attenuation + dist * linear + dist*dist * quad));
glm::vec3 diffuse_term = hit.obj.diffuse * ( std::max(glm::dot(light_direction,hit.normal) , 0.f) );
glm::vec3 specular_term = hit.obj.specular * specular_multiplier;
}
}
And here is the line where I transform the object space normal to world space:
*norm = glm::normalize(transinv * glm::vec4(glm::normalize(p - sphere_center), 0));
Using the full Phong model, instead of Blinn-Phong, I get teardrop highlights:
If I color pixels according to the (absolute value of the) normal at the intersection point I get the following image (r = x, g = y, b = z):
I've solved this issue. It turns out that the normals were all just slightly off, but not enough that the image colored by normals could depict it.
I found this out by computing the normals on spheres with a uniform scale and a translation.
The problem occurred in the line where I transformed the normals to world space:
*norm = glm::normalize(transinv * glm::vec4(glm::normalize(p - sphere_center), 0));
I assumed that the homogeneous coordinate would be 0 after the transformation because it was zero beforehand (rotations and scales do not affect it, and because it is 0, neither can translations). However, it is not 0 because the matrix is transposed, so the bottom row was filled with the inverse translations, causing the homogeneous coordinate to be nonzero.
The 4-vector is then normalized and the result is assigned to a 3-vector. The constructor for the 3-vector simply removes the last entry, so the normal was left unnormalized.
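For reference, one way to avoid that stray w term entirely (just a sketch, reusing transinv, p and sphere_center from above) is to transform with only the upper-left 3x3 of the inverse-transpose, so the translation row can never contribute:
glm::vec3 objectNormal = glm::normalize(p - sphere_center);
*norm = glm::normalize(glm::mat3(transinv) * objectNormal); // 3x3 part only: no translation, no w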
Here's the final picture:

Why is this basic "rotate around the origin" failing to work?

I've done this a hundred times, but this is my first time with a manually constructed cube made of "sticks", which are 3D lines. It's constructed around the origin, out 5 from the origin in each of the X, Y, and Z directions.
When I rotate it, I'm still "inside it" and it rotates around me (the camera). I'm applying a translation and rotation, so I'm stymied as to what I'm doing wrong.
Here's the basic code to rotate the box, by which I mean generate its world matrix:
float rotateX = 0.0f, rotateY = 0.0f, rotateZ = 0.0f;
XMFLOAT4 positionBox = XMFLOAT4(0, 0, -50, 1); // Camera at origin looking at this
XMMATRIX matrixCubeWorld;
void CALLBACK OnFrameMove( double fTime, float fElapsedTime, void* pUserContext )
{
auto pCamera = g_GameServices.GetService<CWorldCamera>();
XMMATRIX translation = XMMatrixTranslationFromVector(XMLoadFloat4(&positionBox));
XMMATRIX rotation = XMMatrixRotationRollPitchYaw(rotateX, rotateY, rotateZ);
matrixCubeWorld = rotation * translation;
if (GetKeyState('X') < 0)
rotateX = RotateAround(rotateX, fElapsedTime);
if (GetKeyState('Y') < 0)
rotateY = RotateAround(rotateY, fElapsedTime);
}
And when I set up to draw, I use that matrix:
D3D11_MAPPED_SUBRESOURCE MappedResource;
V(pd3dImmediateContext->Map(_pVertexShaderVariables, 0, D3D11_MAP_WRITE_DISCARD, 0, &MappedResource));
auto pCB = reinterpret_cast<VSCB3DLineChangesEveryFrame *>(MappedResource.pData);
pCB->_gWorldViewProj = matrixCubeWorld * pCamera->GetViewMatrix() * pCamera->GetProjMatrix();
pd3dImmediateContext->Unmap(_pVertexShaderVariables, 0);
return hr;
...and the shader is as simple as can be:
VertexShaderOutput Line3DVertexShaderFunction(float3 position : POSITION, float4 color : COLOR, float2 tex : TEXCOORD0)
{
VertexShaderOutput output;
output.position = mul(float4(position, 1), _gWorldViewProj);
output.color = color;
output.tex = tex;
return output;
}
So do I have a bug or a misunderstanding? I've tried it with the inverse of the translation, thinking that would 'bring it back to the origin before rotating', but it didn't improve things.
The transformations look good IMHO.
Maybe it's due to the fact that XMMatrixTranslationFromVector only uses the x, y, z components of the vector (a 3D translation), as the documentation (MSDN) says.
Also make sure that the RotateAround function and the camera view/projection matrices give correct results.
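If the w component of the loaded vector is the concern, one way to rule it out (just a sketch, using the question's own variables) is to build the translation from the three scalars directly:
// DirectXMath's XMMatrixTranslation takes the three offsets as plain floats,
// so no w component is involved at all.
XMMATRIX translation = XMMatrixTranslation(positionBox.x, positionBox.y, positionBox.z);
XMMATRIX rotation    = XMMatrixRotationRollPitchYaw(rotateX, rotateY, rotateZ);
matrixCubeWorld      = rotation * translation; // rotate about the cube's own origin, then place it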
Best regards.

Drawing a circle with a sector cut out in OpenGL ES 1.1

I'm trying to draw the following shape using OpenGL ES 1.1. And well, I'm stuck, I don't really know how to go about it.
My game currently uses Android's Canvas API, which isn't hardware accelerated, so I'm rewriting it with OpenGL ES. The Canvas class has a method called drawArc which makes drawing this shape very easy: Canvas.drawArc
Any advice/hints on doing the same with OpenGL ES?
Thank you for reading.
void gltDrawArc(unsigned int const segments, float angle_start, float angle_stop)
{
int i;
float const angle_step = (angle_stop - angle_start)/segments;
GLfloat *arc_vertices;
arc_vertices = malloc(2*sizeof(GLfloat) * (segments+2));
arc_vertices[0] = arc_vertices[1] = 0.f;
for(i=0; i<segments+1; i++) {
arc_vertices[2 + 2*i ] = cos(angle_start + i*angle_step);
arc_vertices[2 + 2*i + 1] = sin(angle_start + i*angle_step);
}
glVertexPointer(2, GL_FLOAT, 0, arc_vertices);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_TRIANGLE_FAN, 0, segments+2);
free(arc_vertices);
}
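For the shape in the question (a full circle with one sector cut out), a call might look like this (the segment count and angles are just example values; the angles are in radians):
const float deg = 3.14159265f / 180.0f;
gltDrawArc(64, 30.0f * deg, 330.0f * deg); // unit circle with a 60-degree wedge removed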
What about just sampling the circle at discrete angles and drawing a GL_TRIANGLE_FAN?
EDIT: Something like this will just draw a sector of a unit circle around the origin in 2D:
glBegin(GL_TRIANGLE_FAN);
glVertex2f(0.0f, 0.0f);
for(angle=startAngle; angle<=endAngle; ++angle)
glVertex2f(cos(angle), sin(angle));
glEnd();
Actually take this more as pseudocode, as sin and cos usually work on radians and I'm using degrees, but you should get the point.
I am new to Android programming, so I am sure there is probably a better way to do this, but I was following the OpenGL ES 1.0 tutorial on the Android developers site (http://developer.android.com/resources/tutorials/opengl/opengl-es10.html), which walks you through drawing a green triangle. If you follow the link you will see most of the code I used there. I wanted to draw a circle on the triangle. The code I added is based on the example posted by datenwolf above, and is shown in the snippets below:
public class HelloOpenGLES10Renderer implements GLSurfaceView.Renderer {
// the number small triangles used to make a circle
public int segments = 100;
public float mAngle;
private FloatBuffer triangleVB;
// array to hold the FloatBuffer for the small triangles
private FloatBuffer [] segmentsArray = new FloatBuffer[segments];
private void initShapes(){
.
.
.
// stuff to draw holes in the board
int i = 0;
float angle_start = 0.0f;
float angle_stop = 2.0f * (float) java.lang.Math.PI;
float angle_step = (angle_stop - angle_start)/segments;
for(i=0; i<segments; i++) {
float[] holeCoords;
FloatBuffer holeVB;
holeCoords = new float [ 9 ];
// initialize vertex Buffer for triangle
// (# of coordinate values * 4 bytes per float)
ByteBuffer vbb2 = ByteBuffer.allocateDirect(holeCoords.length * 4);
vbb2.order(ByteOrder.nativeOrder());// use the device hardware's native byte order
holeVB = vbb2.asFloatBuffer(); // create a floating point buffer from the ByteBuffer
float x1 = 0.05f * (float) java.lang.Math.cos(angle_start + i*angle_step);
float y1 = 0.05f * (float) java.lang.Math.sin(angle_start + i*angle_step);
float z1 = 0.1f;
float x2 = 0.05f * (float) java.lang.Math.cos(angle_start + (i + 1) * angle_step);
float y2 = 0.05f * (float) java.lang.Math.sin(angle_start + (i + 1) * angle_step);
float z2 = 0.1f;
holeCoords[0] = 0.0f;
holeCoords[1] = 0.0f;
holeCoords[2] = 0.1f;
holeCoords[3] = x1;
holeCoords[4] = y1;
holeCoords[5] = z1;
holeCoords[6] = x2;
holeCoords[7] = y2;
holeCoords[8] = z2;
holeVB.put(holeCoords); // add the coordinates to the FloatBuffer
holeVB.position(0); // set the buffer to read the first coordinate
segmentsArray[i] = holeVB;
}
}
.
.
.
public void onDrawFrame(GL10 gl) {
.
.
.
// Draw hole
gl.glColor4f( 1.0f - 0.63671875f, 1.0f - 0.76953125f, 1.0f - 0.22265625f, 0.0f);
for ( int i=0; i<segments; i++ ) {
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, segmentsArray[i]);
gl.glDrawArrays(GL10.GL_TRIANGLES, 0, 3);
}
}
