Composing a tile's texture coordinates using GLSL - algorithm

Preface
Consider the following example image:
Note the following:
Each tile index increments from left to right, top to bottom
There are only 28 valid tiles (out of a possible 32)
In this example, we know that each tile is 32x32 in pixels
We also know the overall image dimensions are 256x128 in pixels
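For example, with 32x32 tiles in a 256x128 image there are 256 / 32 = 8 columns and 128 / 32 = 4 rows, i.e. 8 * 4 = 32 cells, of which only the first 28 hold valid tiles.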
Question
Given only the above info and a valid tile index, how could we use a vertex shader to construct the proper texture coordinates for a desired tile, which would then be passed to the fragment shader for sampling? Luckily, we can make use of gl_VertexID to identify which 'corner' of the image we need... But since we are working with normalized texels (instead of just pixels), we'll need some way of scaling things down to the range 0 to 1, which further complicates the algorithm.
Here's what I have so far, though it only seems to display a solid color from the image:
#version 330 core

layout(location=0) in vec3 in_pos;

out vec2 out_texCoord;

void main()
{
    // These will eventually be uniforms/attributes, and are
    // only used here for more immediate debugging purposes
    int tileNum = 1;        // Desired tile index
    int tileCount = 28;     // Maximum # of tiles
    float tileWidth = 32;   // Width of each tile in pixels
    float tileHeight = 32;  // Height of each tile in pixels
    float imgWidth = 256;   // Overall width of image in pixels
    float imgHeight = 128;  // Overall height of image in pixels

    // Attempt to calculate the correct texture coordinates
    // for the desired tile, given only the above info...
    int tileIndex = tileNum % tileCount;
    int columnCount = int(imgWidth / tileWidth);
    int rowCount = int(imgHeight / tileHeight);
    int tileX = tileIndex % columnCount;
    int tileY = int(float(tileIndex) / float(columnCount));
    float startX = float(tileX) * tileWidth;
    float startY = float(tileY) * tileHeight;
    float endX = 1.0f / (startX + tileWidth);
    float endY = 1.0f / (startY + tileHeight);
    startX = 1.0f / startX;
    startY = 1.0f / startY;

    // Check which corner of the image we are working with
    int vid = gl_VertexID % 4;
    switch(vid) {
        case 0:
            out_texCoord = vec2(startX, endY);
            break;
        case 1:
            out_texCoord = vec2(endX, endY);
            break;
        case 2:
            out_texCoord = vec2(startX, startY);
            break;
        case 3:
            out_texCoord = vec2(endX, startY);
            break;
    }

    gl_Position = vec4(in_pos, 1);
}
Disclaimer
Before anyone comes barging in talking about a lack of info/code, please note that the following code does indeed properly display the entire image, as expected... Effectively, this means that there is something wrong with the actual texture coordinate calculations, and not with my application's OpenGL implementation:
int vid = gl_VertexID % 4;
switch(vid) {
    case 0:
        out_texCoord = vec2(0, 1);
        break;
    case 1:
        out_texCoord = vec2(1, 1);
        break;
    case 2:
        out_texCoord = vec2(0, 0);
        break;
    case 3:
        out_texCoord = vec2(1, 0);
        break;
}

Taking the reciprocal of a texture coordinate (1.0 / x) is a shot in the dark. You just need to scale it down by the image dimensions:
float endX = (startX + tileWidth) / imgWidth;
float endY = (startY + tileHeight) / imgHeight;
startX = startX / imgWidth;
startY = startY / imgHeight;
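Folding that into the shader from the question, the whole coordinate block becomes (a sketch; the hard-coded values still stand in for the eventual uniforms, and integer division replaces the float round-trip for tileY):

int tileIndex = tileNum % tileCount;
int columnCount = int(imgWidth / tileWidth);          // 256 / 32 = 8 columns
int tileX = tileIndex % columnCount;                  // tile's column
int tileY = tileIndex / columnCount;                  // tile's row (integer division)
float startX = float(tileX) * tileWidth  / imgWidth;  // left edge in [0,1]
float startY = float(tileY) * tileHeight / imgHeight; // top edge in [0,1]
float endX   = startX + tileWidth  / imgWidth;        // right edge in [0,1]
float endY   = startY + tileHeight / imgHeight;       // bottom edge in [0,1]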

Related

Creating gyroid pattern in 2D image algorithm

I'm trying to fill an image with gyroid lines of a certain thickness at a certain spacing, but math is not my area. I was able to create a sine wave and shift it a bit in the X direction to make it look like a gyroid, but it's not the same.
The idea behind this is to stack some images with the same resolution and replicate the gyroid as 2D slices, so we still have XYZ, where Z can be 0.01mm to 0.1mm per layer.
What I've tried:
int sineHeight = 100;
int sineWidth = 100;
int spacing = 100;
int radius = 10;

for (int y1 = 0; y1 < mat.Height; y1 += sineHeight + spacing)
    for (int x = 0; x < mat.Width; x++)
    {
        // Simulating first image
        int y2 = (int)(Math.Sin((double)x / sineWidth) * sineHeight / 2.0 + sineHeight / 2.0 + radius);
        Circle(mat, new System.Drawing.Point(x, y1 + y2), radius, EmguExtensions.WhiteColor, -1, LineType.AntiAlias);
        // Simulating second image, shifted by x to look a bit more like a gyroid
        y2 = (int)(Math.Sin((double)x / sineWidth + sineWidth) * sineHeight / 2.0 + sineHeight / 2.0 + radius);
        Circle(mat, new System.Drawing.Point(x, y1 + y2), radius, EmguExtensions.GreyColor, -1, LineType.AntiAlias);
    }
Resulting in (white represents layer 1, grey layer 2):
Still, this looks nothing like a real gyroid. How can I adapt the formula to work in this space?
You have just a single slice because I do not see any z in your code (which is correct for a slice: the surface has horizontal and vertical sine waves like this every 0.5*pi in z).
To see the 3D surface you have to raycast z...
I would expect some conditional testing of the gyroid equation, evaluated at the actually iterated x,y,z, against some small non-zero threshold, like if (result <= 1e-6), drawing only then, or computing a color from the result instead. This is ideal to do in GLSL.
In case you are not familiar with GLSL and shaders: the fragment shader is executed for each pixel (called a fragment) of the rendered quad, so you just put the code inside your nested x,y for loops and use your x,y instead of pos (you can ignore the vertex shader; it's not important here).
You have 2 basic options to render this:
Blending the raycasted surface pixels together, creating an X-ray-like image. It can be combined with SSS techniques to get the impression of glass or a semi-transparent material. Here is a simple GLSL example of the blending:
Vertex:
#version 400 core

in vec2 position;
out vec2 pos;

void main(void)
{
    pos = position;
    gl_Position = vec4(position.xy, 0.0, 1.0);
}
Fragment:
#version 400 core

in vec2 pos;
out vec3 out_col;

void main(void)
{
    float n, x, y, z, dz, d, i, di;
    const float scale = 2.0 * 3.1415926535897932384626433832795;
    n = 100.0;              // layers
    x = pos.x * scale;      // x position of pixel
    y = pos.y * scale;      // y position of pixel
    dz = 2.0 * scale / n;   // z step
    di = 1.0 / n;           // color increment
    i = 0.0;                // color intensity
    for (z = -scale; z <= scale; z += dz) // do all layers
    {
        d  = sin(x) * cos(y); // compute gyroid equation
        d += sin(y) * cos(z);
        d += sin(z) * cos(x);
        if (d <= 1e-6) i += di; // if near surface, add to color
    }
    out_col = vec3(1.0, 1.0, 1.0) * i;
}
Usage is simple: just render a 2D quad covering the screen, without any matrices, with corner pos points in the range <-1,+1>. Here is the result:
Another technique is to render the first hit on the surface, creating a mesh-like image. In order to see the details we need to add basic (double-sided) directional lighting, for which the surface normal is needed. The normal can be computed by simply partially differentiating the equation by x,y,z. As the surface is now opaque, we can stop at the first hit and also raycast just a single period in z, since anything behind it is hidden anyway.
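Concretely, writing the gyroid as g(x,y,z) = sin(x)*cos(y) + sin(y)*cos(z) + sin(z)*cos(x), the (unnormalized) normal is the gradient of g; each term contributes only to the partials of the two variables it actually contains:

dg/dx = cos(x)*cos(y) - sin(z)*sin(x)
dg/dy = cos(y)*cos(z) - sin(x)*sin(y)
dg/dz = cos(z)*cos(x) - sin(y)*sin(z)

Here is a simple example: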
Fragment:
#version 400 core

in vec2 pos;    // input fragment (pixel) position <-1,+1>
out vec3 col;   // output fragment (pixel) RGB color <0,1>

void main(void)
{
    bool _discard = true;
    float N, x, y, z, dz, d, i;
    vec3 n, l;
    const float pi = 3.1415926535897932384626433832795;
    const float scale  = 3.0 * pi; // 3.0 periods in x,y
    const float scalez = 2.0 * pi; // 1.0 period in z
    N = 200.0;             // layers per z (quality)
    x = pos.x * scale;     // <-1,+1> -> [rad]
    y = pos.y * scale;     // <-1,+1> -> [rad]
    dz = 2.0 * scalez / N; // z step
    l = vec3(0.0, 0.0, 1.0); // light unit direction
    i = 0.0;                 // starting color intensity
    n = vec3(0.0, 0.0, 1.0); // starting normal, only to get rid of a warning
    for (z = 0.0; z >= -scalez; z -= dz) // raycast z through all layers in view direction
    {
        // gyroid equation
        d  = sin(x) * cos(y);
        d += sin(y) * cos(z);
        d += sin(z) * cos(x);
        // surface hit test
        if (d > 1e-6) continue; // skip if too far from surface
        _discard = false;       // remember that surface was hit
        // compute normal as the gradient of the gyroid equation
        n.x = cos(x) * cos(y) - sin(z) * sin(x); // partial derivative by x
        n.y = cos(y) * cos(z) - sin(x) * sin(y); // partial derivative by y
        n.z = cos(z) * cos(x) - sin(y) * sin(z); // partial derivative by z
        break; // stop raycasting
    }
    // skip rendering if no hit with surface (hole)
    if (_discard) discard;
    // directional lighting
    n = normalize(n);
    i = abs(dot(l, n));
    // ambient + directional lighting
    i = 0.3 + (0.7 * i);
    // output fragment (render pixel)
    gl_FragDepth = z;              // depth (optional)
    col = vec3(1.0, 1.0, 1.0) * i; // color
}
I hope I did not make an error in the partial derivatives. Here is the result:
[Edit1]
Based on your code I see it like this (X-ray-like blending):
var mat = EmguExtensions.InitMat(new System.Drawing.Size(2000, 1080));
double zz, dz, d, i, di = 0;
// note: scalex/scaley cannot be 'const' in C# because mat.Width/Height are runtime values
double scalex = 2.0 * Math.PI / mat.Width;
double scaley = 2.0 * Math.PI / mat.Height;
const double scalez = 2.0 * Math.PI;
uint layerCount = 100; // layers
for (int y = 0; y < mat.Height; y++)
{
    double yy = y * scaley; // y position of pixel
    for (int x = 0; x < mat.Width; x++)
    {
        double xx = x * scalex; // x position of pixel
        dz = 2.0 * scalez / layerCount; // z step
        di = 1.0 / layerCount;          // color increment
        i = 0.0;                        // color intensity
        for (zz = -scalez; zz <= scalez; zz += dz) // do all layers
        {
            d  = Math.Sin(xx) * Math.Cos(yy); // compute gyroid equation
            d += Math.Sin(yy) * Math.Cos(zz);
            d += Math.Sin(zz) * Math.Cos(xx);
            if (d > 1e-6) continue;
            i += di; // if near surface, add to color
        }
        i *= 255.0;
        mat.SetByte(x, y, (byte)i);
    }
}

GLSL uv lookup and precision with FBO / RenderTarget in Three.js

My application is coded in JavaScript + Three.js / WebGL + GLSL. I have 200 curves, each one made of 85 points. To animate the curves I add a new point and remove the last.
So I made a positions shader that stores the new positions onto a texture (1) and the lines shader that writes the positions for all curves on another texture (2).
The goal is to use textures as arrays: I know the first and last index of a line, so I need to convert those indices to uv coordinates.
I use FBOHelper to debug FBOs.
1) This 1D texture contains the new points for each curve (200 in total): positionTexture
2) And these are the 200 curves, with all their points, one after the other: linesTexture
The black parts are the BUG here. Those texels shouldn't be black.
How does it work: at each frame the shader looks up the new point for each line in the positionTexture and updates the linesTexture accordingly, with a for loop like this:
#define LINES_COUNT 200.0
#define LINE_POINTS 85.0 // with 100 it works!!!

// Then in main()
vec2 uv = gl_FragCoord.xy / resolution.xy;
for (float i = 0.0; i < LINES_COUNT; i += 1.0) {
    float startIdx = i * LINE_POINTS;            // line start index
    float endIdx = startIdx + LINE_POINTS - 1.0; // line end index
    vec2 lastCell = getUVfromIndex(endIdx);      // last uv coordinate reserved for current line
    if (match(lastCell, uv)) {
        pos = texture2D( positionTexture, vec2((i / LINES_COUNT) + minFloat, 0.0) ).xyz;
    } else if (index >= startIdx && index < endIdx) {
        pos = texture2D( lineTexture, getNextUV(uv) ).xyz;
    }
}
This works, but it's slightly buggy when I have many lines (150+): likely a precision problem. I'm not sure the functions I wrote to look up the textures are right. I wrote functions like getNextUV(uv) to get the value from the next index (converted to uv coordinates) and copy it to the previous one, or match(xy, uv) to know if the current fragment is the texel I want.
I thought I could simply use the classic formula:
index = uv.y * width + uv.x
But it's more complicated than that. For example, match():
// Whether a point XY is within a UV coordinate
float size = 132.0;          // width and height of texture
float unit = 1.0 / size;
float minFloat = unit / size;

bool match(vec2 point, vec2 uv) {
    vec2 p = point;
    float x = floor(p.x / unit) * unit;
    float y = floor(p.y / unit) * unit;
    return x <= uv.x && x + unit > uv.x && y <= uv.y && y + unit > uv.y;
}
Or getUVfromIndex():
vec2 getUVfromIndex(float index) {
    float row = floor(index / size);  // Example: 83.56 / 10 = 8
    float col = index - (row * size); // Example: 83.56 - (8 * 10) = 3.56
    col = col / size + minFloat;      // u = 0.357
    row = row / size + minFloat;      // v = 0.81
    return vec2(col, row);
}
Can someone explain the most efficient way to look up values in a texture, by getting a uv coordinate from an index value?
Texture coordinates go from the edges of pixels, not the centers, so your formula to compute a UV coordinate needs to be
u = (xPixelCoord + .5) / widthOfTextureInPixels;
v = (yPixelCoord + .5) / heightOfTextureInPixels;
So I'm guessing you want getUVfromIndex to be
uniform vec2 sizeOfTexture; // allow texture to be any size

vec2 getUVfromIndex(float index) {
    float widthOfTexture = sizeOfTexture.x;
    float col = mod(index, widthOfTexture);
    float row = floor(index / widthOfTexture);
    return (vec2(col, row) + .5) / sizeOfTexture;
}
Or, based on some other experience with math precision issues in shaders, you might need to fudge the index:
uniform vec2 sizeOfTexture; // allow texture to be any size

vec2 getUVfromIndex(float index) {
    float fudgedIndex = index + 0.1;
    float widthOfTexture = sizeOfTexture.x;
    float col = mod(fudgedIndex, widthOfTexture);
    float row = floor(fudgedIndex / widthOfTexture);
    return (vec2(col, row) + .5) / sizeOfTexture;
}
If you're in WebGL2 you can use texelFetch, which takes integer pixel coordinates to get a value from a texture.
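For example (a sketch in GLSL ES 3.00, reusing the lineTexture sampler and float index from the question):

// texelFetch addresses the texture by integer texel coordinates
// (no filtering, no half-texel offsets), so the UV math disappears
ivec2 texSize = textureSize(lineTexture, 0);
ivec2 texel   = ivec2(int(index) % texSize.x, int(index) / texSize.x);
vec3  pos     = texelFetch(lineTexture, texel, 0).xyz;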

Optimize WebGL shader?

I wrote the following shader to render a pattern with a bunch of concentric circles. Eventually I want to have each rotating sphere be a light emitter to create something along these lines.
Of course right now I'm just doing the most basic part to render the different objects.
Unfortunately the shader is incredibly slow (16fps fullscreen on a high-end MacBook). I'm pretty sure this is due to the numerous for loops and the branching I have in the shader. I'm wondering how I can pull off the geometry I'm trying to achieve in a more performance-optimized way:
EDIT: you can run the shader here: https://www.shadertoy.com/view/lssyRH
One obvious optimization I am missing: currently every fragment is checked against all 24 surrounding circles. It would be pretty quick and easy to discard these checks entirely by first checking whether the fragment intersects the outer bounds of the diagram. I guess I'm just trying to get a handle on the best practice for doing something like this.
#define N 10
#define M 5
#define K 24
#define M_PI 3.1415926535897932384626433832795

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    float aspectRatio = iResolution.x / iResolution.y;
    float h = 1.0;
    float w = aspectRatio;
    vec2 uv = vec2(fragCoord.x / iResolution.x * aspectRatio, fragCoord.y / iResolution.y);
    float radius = 0.01;
    float orbitR = 0.02;
    float orbiterRadius = 0.005;
    float centerRadius = 0.002;
    float encloseR = 2.0 * orbitR;
    float encloserRadius = 0.002;
    float spacingX = (w / (float(N) + 1.0));
    float spacingY = h / (float(M) + 1.0);
    float x = 0.0;
    float y = 0.0;
    vec4 totalLight = vec4(0.0, 0.0, 0.0, 1.0);
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < M; j++) {
            // compute the center of the diagram
            vec2 center = vec2(spacingX * (float(i) + 1.0), spacingY * (float(j) + 1.0));
            x = center.x + orbitR * cos(iGlobalTime);
            y = center.y + orbitR * sin(iGlobalTime);
            vec2 bulb = vec2(x, y);
            if (length(uv - center) < centerRadius) {
                // frag intersects white center marker
                fragColor = vec4(1.0);
                return;
            } else if (length(uv - bulb) < radius) {
                // intersects rotating "light"
                fragColor = vec4(uv, 0.5 + 0.5 * sin(iGlobalTime), 1.0);
                return;
            } else {
                // intersects one of the enclosing 24 cylinders
                for (int k = 0; k < K; k++) {
                    float theta = M_PI * 2.0 * float(k) / float(K);
                    x = center.x + cos(theta) * encloseR;
                    y = center.y + sin(theta) * encloseR;
                    vec2 encloser = vec2(x, y);
                    if (length(uv - encloser) < encloserRadius) {
                        fragColor = vec4(uv, 0.5 + 0.5 * sin(iGlobalTime), 1.0);
                        return;
                    }
                }
            }
        }
    }
}
Keeping in mind that you want to optimize the fragment shader, and only the fragment shader:
Move the sin(iGlobalTime) and cos(iGlobalTime) out of the loops; these remain static over the whole draw call, so there is no need to recalculate them every loop iteration.
GPUs employ vectorized instruction sets (SIMD) where possible; take advantage of that. You're wasting lots of cycles by doing multiple scalar ops where you could use a single vector instruction (see annotated code).
[Three years wiser me here: I'm not really sure if this statement is true in regards to how modern GPUs process the instructions, however it certainly does help readability and maybe even gives a hint or two to the compiler]
Do your radius checks squared; save that sqrt (length) for when you really need it.
Replace float casts of constants (your loop limits) with a float constant (intelligent shader compilers will already do this, but it's not something to count on).
Don't have undefined behavior in your shader (not writing to gl_FragColor).
Here is an optimized and annotated version of your shader (still containing that undefined behavior, just like the one you provided). Annotation is in the form of:
// annotation
// old code, if any
new code
#define N 10
// define float constant N
#define fN 10.
#define M 5
// define float constant M
#define fM 5.
#define K 24
// define float constant K
#define fK 24.
#define M_PI 3.1415926535897932384626433832795
// predefine 2 times PI
#define M_PI2 6.28318531

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    float aspectRatio = iResolution.x / iResolution.y;

    // we don't need these separate
    // float h = 1.0;
    // float w = aspectRatio;

    // use vector ops (2 divs 1 mul => 1 div 1 mul)
    // vec2 uv = vec2(fragCoord.x / iResolution.x * aspectRatio, fragCoord.y / iResolution.y);
    vec2 uv = fragCoord.xy / iResolution.xy;
    uv.x *= aspectRatio;

    // most of the following declarations should be predefined or marked as "const"...
    float radius = 0.01;
    // precalc squared radius
    float radius2 = radius * radius;
    float orbitR = 0.02;
    float orbiterRadius = 0.005;
    float centerRadius = 0.002;
    // precalc squared center radius
    float centerRadius2 = centerRadius * centerRadius;
    float encloseR = 2.0 * orbitR;
    float encloserRadius = 0.002;
    // precalc squared encloser radius
    float encloserRadius2 = encloserRadius * encloserRadius;

    // use float constants and vector ops here (2 casts 2 adds 2 divs => 1 add 1 div)
    // float spacingX = w / (float(N) + 1.0);
    // float spacingY = h / (float(M) + 1.0);
    vec2 spacing = vec2(aspectRatio, 1.0) / (vec2(fN, fM) + 1.);

    // calc sin and cos of global time once
    // saves N*M (sin, cos, 2 muls)
    vec2 stct = vec2(sin(iGlobalTime), cos(iGlobalTime));
    vec2 orbit = orbitR * stct;

    // not needed anymore
    // float x = 0.0;
    // float y = 0.0;
    // was never used
    // vec4 totalLight = vec4(0.0, 0.0, 0.0, 1.0);

    for (int i = 0; i < N; i++) {
        for (int j = 0; j < M; j++) {
            // compute the center of the diagram
            // use vector ops
            // vec2 center = vec2(spacingX * (float(i) + 1.0), spacingY * (float(j) + 1.0));
            vec2 center = spacing * (vec2(i, j) + 1.0);

            // again use vector ops, use precalced time trig (orbit = orbitR * stct)
            // x = center.x + orbitR * cos(iGlobalTime);
            // y = center.y + orbitR * sin(iGlobalTime);
            // vec2 bulb = vec2(x,y);
            vec2 bulb = center + orbit;

            // calculate offsets
            vec2 centerOffset = uv - center;
            vec2 bulbOffset = uv - bulb;

            // use squared length check
            // if (length(uv - center) < centerRadius) {
            if (dot(centerOffset, centerOffset) < centerRadius2) {
                // frag intersects white center marker
                fragColor = vec4(1.0);
                return;
            // use squared length check
            // } else if (length(uv - bulb) < radius) {
            } else if (dot(bulbOffset, bulbOffset) < radius2) {
                // use precalced sin of global time in stct.x
                // intersects rotating "light"
                fragColor = vec4(uv, 0.5 + 0.5 * stct.x, 1.0);
                return;
            } else {
                // intersects one of the enclosing 24 cylinders
                for (int k = 0; k < K; k++) {
                    // use predefined 2*PI and float K
                    float theta = M_PI2 * float(k) / fK;
                    // use vector ops (2 muls 2 adds => 1 mul 1 add)
                    // x = center.x + cos(theta) * encloseR;
                    // y = center.y + sin(theta) * encloseR;
                    // vec2 encloser = vec2(x,y);
                    vec2 encloseOffset = uv - (center + vec2(cos(theta), sin(theta)) * encloseR);
                    if (dot(encloseOffset, encloseOffset) < encloserRadius2) {
                        fragColor = vec4(uv, 0.5 + 0.5 * stct.x, 1.0);
                        return;
                    }
                }
            }
        }
    }
}
I did a little more thinking... I realized the best way to optimize it is to actually change the logic so that, before doing intersection tests on the small circles, it checks the bounds of the group of circles. This got it to run at 60fps:
Example here:
https://www.shadertoy.com/view/lssyRH
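The bounding test itself is just one more squared-distance check per group, placed right after centerOffset is computed in the optimized shader above (a sketch; taking encloseR + encloserRadius as the group's outer bound is an assumption about the diagram's extent):

// skip this group entirely if the fragment lies outside its outer bound
float groupR = encloseR + encloserRadius;
if (dot(centerOffset, centerOffset) > groupR * groupR) continue;
// ...only then run the center / bulb / 24-circle tests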

Processing mirror image over x axis?

I was able to copy the image to the location, but not able to mirror it. What am I missing?
PImage img;
float srcY;
float srcX;
int destX;
int destY;

img = loadImage("http://oldpalmgolfclub.com/wp-content/uploads/2012/02/Palm-Beach-State-College2-e1329949470871.jpg");
size(img.width, img.height * 2);
image(img, 0, 0);
image(img, 0, 330);
int num_pixels = img.width * img.height;
int copiedWidth = 319 - 254;
int copiedHeight = 85 - 22;
int startX = (width / 2) - (copiedWidth / 2);
int startY = (height / 2) - (copiedHeight / 2);
How about simply scaling by -1 on the x axis?
PImage img;
img = loadImage("https://processing.org/img/processing-web.png");
size(img.width, img.height * 2);
image(img, 0, 0);
scale(-1, 1);                       // flip on X axis
image(img, -img.width, img.height); // draw offset
This can be achieved by manipulating pixels as well, but needs a bit of arithmetic:
PImage img;
img = loadImage("https://processing.org/img/processing-web.png");
size(img.width, img.height * 2);
int t = millis();
PImage flipped = createImage(img.width, img.height, RGB); // create a new image with the same dimensions
for (int i = 0; i < flipped.pixels.length; i++) {  // loop through each pixel
    int srcX = i % flipped.width;                  // calculate source (original) x position
    int dstX = flipped.width - srcX - 1;           // calculate destination (flipped) x position = (maximum-x-1)
    int y = i / flipped.width;                     // calculate y coordinate
    flipped.pixels[y * flipped.width + dstX] = img.pixels[i]; // write the destination (x flipped) pixel based on the current pixel
}
// y*width+x converts from x,y to a pixel array index
flipped.updatePixels();
println("done in " + (millis() - t) + "ms");
image(img, 0, 0);
image(flipped, 0, img.height);
The above can be achieved using get() and set(), but using the pixels[] array is faster. A single for loop is generally faster than two nested for loops traversing the image with x,y counters:
PImage img;
img = loadImage("https://processing.org/img/processing-web.png");
size(img.width, img.height * 2);
int t = millis();
PImage flipped = createImage(img.width, img.height, RGB); // create a new image with the same dimensions
for (int y = 0; y < img.height; y++) {
    for (int x = 0; x < img.width; x++) {
        flipped.set(img.width - x - 1, y, img.get(x, y));
    }
}
println("done in " + (millis() - t) + "ms");
image(img, 0, 0);
image(flipped, 0, img.height);
You can also copy a 1px 'slice'/column at a time in a single for loop, which is faster (but still not as fast as direct pixel manipulation):
PImage img;
img = loadImage("https://processing.org/img/processing-web.png");
size(img.width, img.height * 2);
int t = millis();
PImage flipped = createImage(img.width, img.height, RGB); // create a new image with the same dimensions
for (int x = 0; x < flipped.width; x++) { // loop through each column
    flipped.set(flipped.width - x - 1, 0, img.get(x, 0, 1, img.height)); // copy a column in reverse x order
}
println("done in " + (millis() - t) + "ms");
image(img, 0, 0);
image(flipped, 0, img.height);
There are other alternatives, like accessing the Java BufferedImage (although this means the Processing sketch will mostly work in Java Mode only) or using a PShader, but these approaches are more complex. It's generally a good idea to keep things simple (especially when getting started).
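For reference, the PShader route boils down to flipping the texture coordinate in a fragment shader. A minimal sketch (the uniform/varying names follow Processing's usual texture-shader conventions):

#ifdef GL_ES
precision mediump float;
#endif

uniform sampler2D texture;
varying vec4 vertTexCoord;

void main() {
    // sample with the x texture coordinate mirrored
    gl_FragColor = texture2D(texture, vec2(1.0 - vertTexCoord.x, vertTexCoord.y));
}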

Drawing a circle with a sector cut out in OpenGL ES 1.1

I'm trying to draw the following shape using OpenGL ES 1.1, and, well, I'm stuck: I don't really know how to go about it.
My game currently uses Android's Canvas API, which isn't hardware accelerated, so I'm rewriting it with OpenGL ES. The Canvas class has a method called drawArc which makes drawing this shape very easy; see Canvas.drawArc.
Any advice/hints on doing the same with OpenGL ES?
Thank you for reading.
// requires <stdlib.h> and <math.h>
void gltDrawArc(unsigned int const segments, float angle_start, float angle_stop)
{
    int i;
    float const angle_step = (angle_stop - angle_start) / segments;
    GLfloat *arc_vertices;

    arc_vertices = malloc(2 * sizeof(GLfloat) * (segments + 2));
    arc_vertices[0] = arc_vertices[1] = 0.0f; // center of the fan
    for (i = 0; i < segments + 1; i++) {
        arc_vertices[2 + 2*i    ] = cos(angle_start + i * angle_step);
        arc_vertices[2 + 2*i + 1] = sin(angle_start + i * angle_step);
    }

    glVertexPointer(2, GL_FLOAT, 0, arc_vertices);
    glEnableClientState(GL_VERTEX_ARRAY);
    glDrawArrays(GL_TRIANGLE_FAN, 0, segments + 2);

    free(arc_vertices);
}
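For the shape in the question (a disc with a wedge cut out), a call along the lines of gltDrawArc(64, 0.25f*M_PI, 1.75f*M_PI) would sweep 270° and leave a 90° sector open; the exact angles depend on where the cut should sit.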
What about just sampling the circle at discrete angles and drawing a GL_TRIANGLE_FAN?
EDIT: Something like this will just draw a sector of a unit circle around the origin in 2D:
glBegin(GL_TRIANGLE_FAN);
glVertex2f(0.0f, 0.0f);
for (angle = startAngle; angle <= endAngle; ++angle)
    glVertex2f(cos(angle), sin(angle));
glEnd();
Actually, take this more as pseudocode, since sin and cos usually work in radians and I'm stepping in degrees, but you should get the point.
I am new to Android programming, so I am sure there is probably a better way to do this. But I was following the OpenGL ES 1.0 tutorial on the Android developers site (http://developer.android.com/resources/tutorials/opengl/opengl-es10.html), which walks you through drawing a green triangle. You can follow the link and you will see most of the code I used there. I wanted to draw a circle on the triangle. The code I added is based on the example posted by datenwolf above and is shown in snippets below:
public class HelloOpenGLES10Renderer implements GLSurfaceView.Renderer {

    // the number of small triangles used to make a circle
    public int segments = 100;
    public float mAngle;
    private FloatBuffer triangleVB;
    // array to hold the FloatBuffer for the small triangles
    private FloatBuffer [] segmentsArray = new FloatBuffer[segments];

    private void initShapes(){
        .
        .
        .
        // stuff to draw holes in the board
        int i = 0;
        float angle_start = 0.0f;
        float angle_stop = 2.0f * (float) java.lang.Math.PI;
        float angle_step = (angle_stop - angle_start) / segments;
        for (i = 0; i < segments; i++) {
            float[] holeCoords;
            FloatBuffer holeVB;
            holeCoords = new float [ 9 ];
            // initialize vertex Buffer for triangle
            // (# of coordinate values * 4 bytes per float)
            ByteBuffer vbb2 = ByteBuffer.allocateDirect(holeCoords.length * 4);
            vbb2.order(ByteOrder.nativeOrder()); // use the device hardware's native byte order
            holeVB = vbb2.asFloatBuffer();       // create a floating point buffer from the ByteBuffer
            float x1 = 0.05f * (float) java.lang.Math.cos(angle_start + i * angle_step);
            float y1 = 0.05f * (float) java.lang.Math.sin(angle_start + i * angle_step);
            float z1 = 0.1f;
            // note: (i + 1) needs parentheses, otherwise only angle_step is multiplied
            float x2 = 0.05f * (float) java.lang.Math.cos(angle_start + (i + 1) * angle_step);
            float y2 = 0.05f * (float) java.lang.Math.sin(angle_start + (i + 1) * angle_step);
            float z2 = 0.1f;
            holeCoords[0] = 0.0f;
            holeCoords[1] = 0.0f;
            holeCoords[2] = 0.1f;
            holeCoords[3] = x1;
            holeCoords[4] = y1;
            holeCoords[5] = z1;
            holeCoords[6] = x2;
            holeCoords[7] = y2;
            holeCoords[8] = z2;
            holeVB.put(holeCoords); // add the coordinates to the FloatBuffer
            holeVB.position(0);     // set the buffer to read the first coordinate
            segmentsArray[i] = holeVB;
        }
    }
    .
    .
    .
    public void onDrawFrame(GL10 gl) {
        .
        .
        .
        // Draw hole
        gl.glColor4f(1.0f - 0.63671875f, 1.0f - 0.76953125f, 1.0f - 0.22265625f, 0.0f);
        for (int i = 0; i < segments; i++) {
            gl.glVertexPointer(3, GL10.GL_FLOAT, 0, segmentsArray[i]);
            gl.glDrawArrays(GL10.GL_TRIANGLES, 0, 3);
        }
    }
