Creating a gyroid pattern in a 2D image - algorithm

I'm trying to fill an image with gyroid lines of a certain thickness at a certain spacing, but math is not my area. I was able to create a sine wave and shift it a bit in the X direction to make it look like a gyroid, but it's not the same.
The idea behind this is to stack images of the same resolution and replicate the gyroid across the 2D images, so we still have XYZ, where Z can be 0.01mm to 0.1mm per layer.
What I've tried:
int sineHeight = 100;
int sineWidth = 100;
int spacing = 100;
int radius = 10;
for (int y1 = 0; y1 < mat.Height; y1 += sineHeight + spacing)
{
    for (int x = 0; x < mat.Width; x++)
    {
        // Simulating first image
        int y2 = (int)(Math.Sin((double)x / sineWidth) * sineHeight / 2.0 + sineHeight / 2.0 + radius);
        Circle(mat, new System.Drawing.Point(x, y1 + y2), radius, EmguExtensions.WhiteColor, -1, LineType.AntiAlias);

        // Simulating second image, shifted in x to make it look a bit more like a gyroid
        y2 = (int)(Math.Sin((double)x / sineWidth + sineWidth) * sineHeight / 2.0 + sineHeight / 2.0 + radius);
        Circle(mat, new System.Drawing.Point(x, y1 + y2), radius, EmguExtensions.GreyColor, -1, LineType.AntiAlias);
    }
}
Resulting in: (white represents layer 1, grey represents layer 2)
Still, this looks nothing like a real gyroid. How can I adapt the formula to work in this space?

You have just a single (ugly) slice because I do not see any z in your code (it is correct that the surface produces horizontal and vertical sine waves like this every 0.5*pi in z).
To see the 3D surface you have to raycast z ...
I would expect some conditional testing of the gyroid equation sin(x)*cos(y) + sin(y)*cos(z) + sin(z)*cos(x) = 0 at the actually iterated x, y, z against some small non-zero threshold, like if (result <= 1e-6), and drawing the stuff only then, or computing the color from the result instead. This is ideal to do in GLSL.
In case you are not familiar with GLSL and shaders: the fragment shader is executed for each pixel (called a fragment) of the rendered quad, so you just put the code inside your nested x,y for loops and use your x,y instead of pos (you can ignore the vertex shader, it's not important).
You have 2 basic options to render this:
Blending the raycast surface pixels together, creating an X-ray-like image. It can be combined with SSS techniques to get the impression of glass or a semi-transparent material. Here is a simple GLSL example of the blending:
Vertex:
#version 400 core
in vec2 position;
out vec2 pos;
void main(void)
{
    pos = position;
    gl_Position = vec4(position.xy, 0.0, 1.0);
}
Fragment:
#version 400 core
in vec2 pos;
out vec3 out_col;
void main(void)
{
    float n, x, y, z, dz, d, i, di;
    const float scale = 2.0 * 3.1415926535897932384626433832795;
    n = 100.0;            // layers
    x = pos.x * scale;    // x position of pixel
    y = pos.y * scale;    // y position of pixel
    dz = 2.0 * scale / n; // z step
    di = 1.0 / n;         // color increment
    i = 0.0;              // color intensity
    for (z = -scale; z <= scale; z += dz) // do all layers
    {
        d  = sin(x) * cos(y); // compute gyroid equation
        d += sin(y) * cos(z);
        d += sin(z) * cos(x);
        if (d <= 1e-6) i += di; // if near surface, add to color
    }
    out_col = vec3(1.0, 1.0, 1.0) * i;
}
Usage is simple: just render a 2D quad covering the screen, without any matrices, with corner pos points in the range <-1,+1>. Here is the result:
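For completeness, a minimal sketch of that quad setup and draw call (this assumes a GL 4.0 context is current with function pointers loaded, and that the two shaders above are compiled and linked into a hypothetical program object prog):

// two triangles covering NDC <-1,+1>, fed straight into the 'position' input
const float quad[] = {
    -1.0f, -1.0f,   +1.0f, -1.0f,   +1.0f, +1.0f,
    -1.0f, -1.0f,   +1.0f, +1.0f,   -1.0f, +1.0f,
};
GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
GLint loc = glGetAttribLocation(prog, "position");
glEnableVertexAttribArray(loc);
glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE, 0, 0);
glUseProgram(prog);
glDrawArrays(GL_TRIANGLES, 0, 6); // the fragment shader then runs once per pixel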
Another technique is to render the first hit of the surface, creating a mesh-like image. In order to see the details we need to add basic (double-sided) directional lighting, for which the surface normal is needed. The normal can be computed by simply partially differentiating the equation by x, y, z: writing the gyroid as f(x,y,z) = sin(x)*cos(y) + sin(y)*cos(z) + sin(z)*cos(x), the gradient is
df/dx = cos(x)*cos(y) - sin(z)*sin(x)
df/dy = -sin(x)*sin(y) + cos(y)*cos(z)
df/dz = -sin(y)*sin(z) + cos(z)*cos(x)
As the surface is now opaque, we can stop on the first hit and also raycast just a single period in z, as anything after that is hidden anyway. Here is a simple example:
Fragment:
#version 400 core
in vec2 pos;  // input fragment (pixel) position <-1,+1>
out vec3 col; // output fragment (pixel) RGB color <0,1>
void main(void)
{
    bool _discard = true;
    float N, x, y, z, dz, d, i;
    vec3 n, l;
    const float pi = 3.1415926535897932384626433832795;
    const float scale  = 3.0 * pi; // 3.0 periods in x,y
    const float scalez = 2.0 * pi; // 1.0 period in z
    N = 200.0;                // layers per z (quality)
    x = pos.x * scale;        // <-1,+1> -> [rad]
    y = pos.y * scale;        // <-1,+1> -> [rad]
    dz = 2.0 * scalez / N;    // z step
    l = vec3(0.0, 0.0, 1.0);  // light unit direction
    i = 0.0;                  // starting color intensity
    n = vec3(0.0, 0.0, 1.0);  // starting normal, only to get rid of a warning
    for (z = 0.0; z >= -scalez; z -= dz) // raycast z through all layers in view direction
    {
        // gyroid equation
        d  = sin(x) * cos(y);
        d += sin(y) * cos(z);
        d += sin(z) * cos(x);
        // surface hit test
        if (d > 1e-6) continue; // skip if too far from surface
        _discard = false;       // remember that surface was hit
        // compute normal as the gradient of the equation
        n.x = +cos(x) * cos(y) - sin(z) * sin(x); // partial derivative by x
        n.y = -sin(x) * sin(y) + cos(y) * cos(z); // partial derivative by y
        n.z = -sin(y) * sin(z) + cos(z) * cos(x); // partial derivative by z
        break; // stop raycasting
    }
    // skip rendering if no hit with surface (hole)
    if (_discard) discard;
    // directional lighting
    n = normalize(n);
    i = abs(dot(l, n));
    // ambient + directional lighting
    i = 0.3 + (0.7 * i);
    // output fragment (render pixel)
    gl_FragDepth = z; // depth (optional)
    col = vec3(1.0, 1.0, 1.0) * i; // color
}
I hope I did not make an error in the partial derivatives. Here is the result:
[Edit1]
Based on your code I see it like this (X-ray-like blending):
var mat = EmguExtensions.InitMat(new System.Drawing.Size(2000, 1080));
double zz, dz, d, i, di = 0;
double scalex = 2.0 * Math.PI / mat.Width;  // cannot be const: mat.Width is not a compile-time constant
double scaley = 2.0 * Math.PI / mat.Height;
const double scalez = 2.0 * Math.PI;
uint layerCount = 100; // layers
for (int y = 0; y < mat.Height; y++)
{
    double yy = y * scaley; // y position of pixel
    for (int x = 0; x < mat.Width; x++)
    {
        double xx = x * scalex; // x position of pixel
        dz = 2.0 * scalez / layerCount; // z step
        di = 1.0 / layerCount;          // color increment
        i = 0.0;                        // color intensity
        for (zz = -scalez; zz <= scalez; zz += dz) // do all layers
        {
            d  = Math.Sin(xx) * Math.Cos(yy); // compute gyroid equation
            d += Math.Sin(yy) * Math.Cos(zz);
            d += Math.Sin(zz) * Math.Cos(xx);
            if (d > 1e-6) continue;
            i += di; // if near surface, add to color
        }
        i *= 255.0;
        mat.SetByte(x, y, (byte)i);
    }
}
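Since the end goal is one image per printer layer, note that a single slice is just the same evaluation with z held fixed: pick the layer's z, evaluate the equation once per pixel, and threshold |d| to control the line thickness. A minimal standalone sketch of that idea (plain C++ writing an ASCII PGM instead of an Emgu Mat; the z and thickness values are arbitrary assumptions):

#include <cmath>
#include <cstdio>

int main()
{
    const int width = 512, height = 512;
    const double pi = 3.141592653589793;
    const double scalex = 2.0 * pi / width;  // one gyroid period across the image
    const double scaley = 2.0 * pi / height;
    const double z = 0.25 * 2.0 * pi;        // this layer's height mapped to radians
    const double thickness = 0.15;           // |d| below this counts as a wall

    std::printf("P2\n%d %d\n255\n", width, height);
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            double xx = x * scalex, yy = y * scaley;
            double d = std::sin(xx) * std::cos(yy)
                     + std::sin(yy) * std::cos(z)
                     + std::sin(z)  * std::cos(xx);
            std::printf("%d ", std::fabs(d) <= thickness ? 255 : 0);
        }
        std::printf("\n");
    }
    return 0;
}

Stepping z by the layer height between images then gives the stack of XY slices described in the question.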

Related

Confusion about zFar and zNear plane offsets using glm::perspective

I have been using glm to help build a software rasterizer for self-education. In my camera class I am using glm::lookAt() to create my view matrix and glm::perspective() to create my perspective matrix.
I seem to be getting what I expect for my left, right, top and bottom clipping planes. However, I seem to be either doing something wrong for my near/far planes, or there is an error in my understanding. I have reached a point at which my "google-fu" has failed me.
Operating under the assumption that I am correctly extracting clip planes from my glm::perspective matrix, and using the general plane equation:
aX+bY+cZ+d = 0
I am getting strange d or "offset" values for my zNear and zFar planes.
It is my understanding that the d value is the amount by which I would be shifting/translating the point P0 of a plane along the normal vector.
They are 0.200200200 and -0.200200200 respectively. However, my normals are correctly oriented at +1.0f and -1.0f along the z-axis, as expected for planes perpendicular to my z basis vector.
So when testing a point such as (0, 0, -5) in world space against these planes, it is transformed by my view matrix to:
(0, 0, 5.81181192)
so, testing it against these planes in a clip chain, said example vertex would be culled.
Here is the start of a camera class establishing the relevant matrices:
static constexpr glm::vec3 UPvec(0.f, 1.f, 0.f);
static constexpr auto zFar = 100.f;
static constexpr auto zNear = 0.1f;

Camera::Camera(glm::vec3 eye, glm::vec3 center, float fovY, float w, float h) :
    viewMatrix{ glm::lookAt(eye, center, UPvec) },
    perspectiveMatrix{ glm::perspective(glm::radians<float>(fovY), w/h, zNear, zFar) },
    frustumLeftPlane {setPlane(0, 1)},
    frustumRightPlane {setPlane(0, 0)},
    frustumBottomPlane {setPlane(1, 1)},
    frustumTopPlane {setPlane(1, 0)},
    frustumNearPlane {setPlane(2, 0)},
    frustumFarPlane {setPlane(2, 1)},
The frustum objects are based on the following struct:
struct Plane
{
    glm::vec4 normal;
    float offset;
};
I have extracted the 6 clipping planes from the perspective matrix as below:
Plane Camera::setPlane(const int& row, const bool& sign)
{
    float temp[4]{};
    Plane plane{};
    if (sign == 0)
    {
        for (int i = 0; i < 4; ++i)
        {
            temp[i] = perspectiveMatrix[i][3] + perspectiveMatrix[i][row];
        }
    }
    else
    {
        for (int i = 0; i < 4; ++i)
        {
            temp[i] = perspectiveMatrix[i][3] - perspectiveMatrix[i][row];
        }
    }
    plane.normal.x = temp[0];
    plane.normal.y = temp[1];
    plane.normal.z = temp[2];
    plane.normal.w = 0.f;
    plane.offset = temp[3];
    plane.normal = glm::normalize(plane.normal);
    return plane;
}
Any help would be appreciated, as I am now at a loss.
Many thanks.
The d parameter of a plane equation describes how much the plane is offset from the origin along the plane normal. This also takes into account the length of the normal.
One can't just normalize the normal without also adjusting the d parameter since normalizing changes the length of the normal. If you want to normalize a plane equation then you also have to apply the division step to the d coordinate:
float normalLength = sqrt(temp[0] * temp[0] + temp[1] * temp[1] + temp[2] * temp[2]);
plane.normal.x = temp[0] / normalLength;
plane.normal.y = temp[1] / normalLength;
plane.normal.z = temp[2] / normalLength;
plane.normal.w = 0.f;
plane.offset = temp[3] / normalLength;
Side note 1: Usually, one would store the offset of a plane equation in the w-coordinate of a vec4 instead of a separate variable. The reason is that the typical operation you perform with it is a point to plane distance check like dist = n * x - d (for a given point x, normal n, offset d, * is dot product), which can then be written as dist = [n, d] * [x, -1].
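For illustration, a minimal glm sketch of that packed-vec4 convention (the plane and point values are arbitrary example data):

#include <cstdio>
#include <glm/glm.hpp>

int main()
{
    // plane z = 2: unit normal n = (0,0,1), offset d = 2 along the normal
    glm::vec4 plane(0.0f, 0.0f, 1.0f, 2.0f); // [n, d] packed into one vec4
    glm::vec3 point(0.0f, 0.0f, 5.0f);

    // dist = [n, d] * [x, -1] = n . x - d
    float dist = glm::dot(plane, glm::vec4(point, -1.0f));
    std::printf("%f\n", dist); // 3.0: the point lies 3 units in front of the plane
    return 0;
}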
Side note 2: Most software and hardware rasterizers perform clipping after the projection step, since it's cheaper and easier to implement.

Opengl uvw projection, what's under the hood? [duplicate]

If linear interpolation happens during the rasterization stage in the OpenGL pipeline, and the vertices have already been transformed to screen-space, where does the depth information used for perspectively correct interpolation come from?
Can anybody give a detailed description of how OpenGL goes from screen-space primitives to fragments with correctly interpolated values?
The output of a vertex shader is a four component vector, vec4 gl_Position. From Section 13.6 Coordinate Transformations of core GL 4.4 spec:
Clip coordinates for a vertex result from shader execution, which yields a vertex coordinate gl_Position.
Perspective division on clip coordinates yields normalized device coordinates, followed by a viewport transformation (see section 13.6.1) to convert these coordinates into window coordinates.
OpenGL does the perspective divide as
device.xyz = gl_Position.xyz / gl_Position.w
But then keeps the 1 / gl_Position.w as the last component of gl_FragCoord:
gl_FragCoord.xyz = device.xyz scaled to viewport
gl_FragCoord.w = 1 / gl_Position.w
This transform is bijective, so no depth information is lost. In fact as we see below, the 1 / gl_Position.w is crucial for perspective correct interpolation.
Short introduction to barycentric coordinates
Given a triangle (P0, P1, P2) one can parametrize all the points inside the triangle by the linear combinations of the vertices:
P(b0,b1,b2) = P0*b0 + P1*b1 + P2*b2
where b0 + b1 + b2 = 1 and b0 ≥ 0, b1 ≥ 0, b2 ≥ 0.
Given a point P inside the triangle, the coefficients (b0, b1, b2) that satisfy the equation above are called the barycentric coordinates of that point. For non-degenerate triangles they are unique, and can be calculated as quotients of the areas of the following triangles:
b0(P) = area(P, P1, P2) / area(P0, P1, P2)
b1(P) = area(P0, P, P2) / area(P0, P1, P2)
b2(P) = area(P0, P1, P) / area(P0, P1, P2)
Each bi can be thought of as 'how much of Pi has to be mixed in'. So b = (1,0,0), (0,1,0) and (0,0,1) are the vertices of the triangle, (1/3, 1/3, 1/3) is the barycenter, and so on.
Given an attribute (f0, f1, f2) on the vertices of the triangle, we can now interpolate it over the interior:
f(P) = f0*b0(P) + f1*b1(P) + f2*b2(P)
This is a linear function of P, therefore it is the unique linear interpolant over the given triangle. The math also works in either 2D or 3D.
Perspective correct interpolation
Let's say we fill a projected 2D triangle on the screen. For every fragment we have its window coordinates. First we calculate its barycentric coordinates by inverting the P(b0,b1,b2) function, which is a linear function in window coordinates. This gives us the barycentric coordinates of the fragment on the 2D triangle projection.
Perspective correct interpolation of an attribute would vary linearly in the clip coordinates (and by extension, world coordinates). For that we need to get the barycentric coordinates of the fragment in clip space.
As it happens (see [1] and [2]), the depth of the fragment is not linear in window coordinates, but the depth inverse (1/gl_Position.w) is. Accordingly the attributes and the clip-space barycentric coordinates, when weighted by the depth inverse, vary linearly in window coordinates.
Therefore, we compute the perspective corrected barycentric by:
( b0 / gl_Position[0].w, b1 / gl_Position[1].w, b2 / gl_Position[2].w )
B = -------------------------------------------------------------------------
b0 / gl_Position[0].w + b1 / gl_Position[1].w + b2 / gl_Position[2].w
and then use it to interpolate the attributes from the vertices.
Note: GL_NV_fragment_shader_barycentric exposes the device-linear barycentric coordinates through gl_BaryCoordNoPerspNV and the perspective corrected through gl_BaryCoordNV.
Implementation
Here is C++ code that rasterizes and shades a triangle on the CPU, in a manner similar to OpenGL. I encourage you to compare it with the shaders listed below:
struct Renderbuffer { int w, h, ys; void *data; };
struct Vert { vec4 position, texcoord, color; };
struct Varying { vec4 texcoord, color; };

void vertex_shader(const Vert &in, vec4 &gl_Position, Varying &OUT) {
    OUT.texcoord = in.texcoord;
    OUT.color = in.color;
    gl_Position = vec4(in.position.x, in.position.y, -2*in.position.z - 2*in.position.w, -in.position.z);
}

void fragment_shader(vec4 &gl_FragCoord, const Varying &IN, vec4 &OUT) {
    OUT = IN.color;
    vec2 wrapped = IN.texcoord.xy - floor(IN.texcoord.xy);
    bool brighter = (wrapped[0] < 0.5) != (wrapped[1] < 0.5);
    if(!brighter)
        OUT.rgb *= 0.5f;
}

// render output unit/render operations pipeline
void rop(Renderbuffer &buf, int x, int y, const vec4 &c) {
    uint8_t *p = (uint8_t*)buf.data + buf.ys*(buf.h - y - 1) + 4*x;
    p[0] = linear_to_srgb8(c[0]);
    p[1] = linear_to_srgb8(c[1]);
    p[2] = linear_to_srgb8(c[2]);
    p[3] = lround(c[3]*255);
}

void draw_triangle(Renderbuffer &color_attachment, const box2 &viewport, const Vert *verts) {
    auto area = [](const vec2 &p0, const vec2 &p1, const vec2 &p2) { return cross(p1 - p0, p2 - p0); };
    auto interpolate = [](const auto a[3], auto p, const vec3 &coord) { return coord.x*a[0].*p + coord.y*a[1].*p + coord.z*a[2].*p; };

    Varying perVertex[3];
    vec4 gl_Position[3];

    box2 aabb = { viewport.hi, viewport.lo };
    for(int i = 0; i < 3; ++i) {
        vertex_shader(verts[i], gl_Position[i], perVertex[i]);

        // convert to normalized device coordinates
        gl_Position[i].w = 1/gl_Position[i].w;
        gl_Position[i].xyz *= gl_Position[i].w;

        // convert to window coordinates
        gl_Position[i].xy = mix(viewport.lo, viewport.hi, 0.5f*(gl_Position[i].xy + 1.0f));
        aabb = join(aabb, gl_Position[i].xy);
    }

    const float denom = 1/area(gl_Position[0].xy, gl_Position[1].xy, gl_Position[2].xy);

    // loop over all pixels in the rectangle bounding the triangle
    const ibox2 iaabb = lround(aabb);
    for(int y = iaabb.lo.y; y < iaabb.hi.y; ++y)
    for(int x = iaabb.lo.x; x < iaabb.hi.x; ++x)
    {
        vec4 gl_FragCoord;
        gl_FragCoord.xy = vec2(x, y) + 0.5f;

        // fragment barycentric coordinates in window coordinates
        const vec3 barycentric = denom*vec3(
            area(gl_FragCoord.xy, gl_Position[1].xy, gl_Position[2].xy),
            area(gl_Position[0].xy, gl_FragCoord.xy, gl_Position[2].xy),
            area(gl_Position[0].xy, gl_Position[1].xy, gl_FragCoord.xy)
        );

        // discard fragment outside the triangle. this doesn't handle edges correctly.
        if(barycentric.x < 0 || barycentric.y < 0 || barycentric.z < 0)
            continue;

        // interpolate inverse depth linearly
        gl_FragCoord.z = interpolate(gl_Position, &vec4::z, barycentric);
        gl_FragCoord.w = interpolate(gl_Position, &vec4::w, barycentric);

        // clip fragments to the near/far planes (as if by GL_ZERO_TO_ONE)
        if(gl_FragCoord.z < 0 || gl_FragCoord.z > 1)
            continue;

        // convert to perspective correct (clip-space) barycentric
        const vec3 perspective = 1/gl_FragCoord.w*barycentric*vec3(gl_Position[0].w, gl_Position[1].w, gl_Position[2].w);

        // interpolate attributes
        Varying varying = {
            interpolate(perVertex, &Varying::texcoord, perspective),
            interpolate(perVertex, &Varying::color, perspective),
        };

        vec4 color;
        fragment_shader(gl_FragCoord, varying, color);
        rop(color_attachment, x, y, color);
    }
}

int main(int argc, char *argv[]) {
    Renderbuffer buffer = { 512, 512, 512*4 };
    buffer.data = calloc(buffer.ys, buffer.h);

    // VAO interleaved attributes buffer
    Vert verts[] = {
        { { -1, -1, -2, 1 }, {  0,  0, 0, 1 }, { 0, 0, 1, 1 } },
        { {  1, -1, -1, 1 }, { 10,  0, 0, 1 }, { 1, 0, 0, 1 } },
        { {  0,  1, -1, 1 }, {  0, 10, 0, 1 }, { 0, 1, 0, 1 } },
    };

    box2 viewport = { 0, 0, buffer.w, buffer.h };
    draw_triangle(buffer, viewport, verts);
    stbi_write_png("out.png", buffer.w, buffer.h, 4, buffer.data, buffer.ys);
}
OpenGL shaders
Here are the OpenGL shaders used to generate the reference image.
Vertex shader:
#version 450 core
layout(location = 0) in vec4 position;
layout(location = 1) in vec4 texcoord;
layout(location = 2) in vec4 color;
out gl_PerVertex { vec4 gl_Position; };
layout(location = 0) out Varying { vec4 texcoord; vec4 color; } OUT;
void main() {
    OUT.texcoord = texcoord;
    OUT.color = color;
    gl_Position = vec4(position.x, position.y, -2*position.z - 2*position.w, -position.z);
}
Fragment shader:
#version 450 core
layout(location = 0) in Varying { vec4 texcoord; vec4 color; } IN;
layout(location = 0) out vec4 OUT;
void main() {
    OUT = IN.color;
    vec2 wrapped = fract(IN.texcoord.xy);
    bool brighter = (wrapped.x < 0.5) != (wrapped.y < 0.5);
    if(!brighter)
        OUT.rgb *= 0.5;
}
Results
Here are the almost identical images generated by the C++ (left) and OpenGL (right) code:
The differences are caused by different precision and rounding modes.
For comparison, here is one that is not perspective correct (uses barycentric instead of perspective for the interpolation in the code above):
The formula that you will find in the GL specification (look on page 427; the link is to the current 4.4 spec, but it has always been that way) for perspective-corrected interpolation of the attribute value in a triangle is:
a * f_a / w_a + b * f_b / w_b + c * f_c / w_c
f=-----------------------------------------------------
a / w_a + b / w_b + c / w_c
where a,b,c denote the barycentric coordinates of the point in the triangle we are interpolating for (a,b,c >=0, a+b+c = 1), f_i the attribute value at vertex i, and w_i the clip space w coordinate of vertex i. Note that the barycentric coordinates are calculated only for the 2D projection of the window space coords of the triangle (so z is ignored).
This is what the formulas that ybungalowbill gave in his fine answer boil down to, in the general case, with an arbitrary projection axis. Actually, the last row of the projection matrix defines just the projection axis the image plane will be orthogonal to, and the clip space w component is just the dot product between the vertex coords and that axis.
In the typical case, the projection matrix has (0,0,-1,0) as the last row, so it transforms such that w_clip = -z_eye, and this is what ybungalowbill used. However, since w is what we actually divide by (that is the only nonlinear step in the whole transformation chain), this will work for any projection axis. It will also work in the trivial case of orthogonal projections where w is always 1 (or at least constant).
Note a few things for an efficient implementation of this. The inversion 1/w_i can be pre-calculated per vertex (let's call these values q_i in the following); it does not have to be re-evaluated per fragment. And it is totally free, since we divide by w anyway when going into NDC space, so we can save that value. The GL spec never describes how a certain feature is to be implemented internally, but the fact that the screen space coordinates will be accessible in gl_FragCoord.xyz, and that gl_FragCoord.w is guaranteed to give the (linearly interpolated) 1/w clip space coordinate, is quite revealing here. That per-fragment 1/w value is actually the denominator of the formula given above.
The factors a/w_a, b/w_b and c/w_c are each used two times in the formula. And these are also constant for any attribute value, no matter how many attributes there are to be interpolated. So, per fragment, you can calculate a' = q_a * a, b' = q_b * b and c' = q_c * c and get
a' * f_a + b' * f_b + c' * f_c
f=------------------------------
a' + b' + c'
So the perspective interpolation boils down to
3 additional multiplications,
2 additional additions, and
1 additional division
per fragment.
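A tiny self-contained sketch of that per-fragment scheme (all names are hypothetical; q_a, q_b, q_c are the per-vertex 1/w values described above):

#include <cstdio>

// Perspective-correct interpolation of one attribute using the
// a' = q_a * a, b' = q_b * b, c' = q_c * c factorization.
float interpolate(float a, float b, float c,    // barycentric coords (2D projection)
                  float qa, float qb, float qc, // per-vertex 1/w
                  float fa, float fb, float fc) // attribute value at each vertex
{
    float a1 = a * qa, b1 = b * qb, c1 = c * qc;
    return (a1 * fa + b1 * fb + c1 * fc) / (a1 + b1 + c1);
}

int main()
{
    // midpoint of an edge between a vertex at w=1 (attribute 0) and a
    // vertex at w=10 (attribute 1): the naive linear result would be 0.5,
    // the perspective-correct one is pulled toward the near vertex.
    std::printf("%f\n", interpolate(0.5f, 0.5f, 0.0f,
                                    1.0f, 0.1f, 1.0f,
                                    0.0f, 1.0f, 0.0f)); // ~0.0909
    return 0;
}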

GLSL uv lookup and precision with FBO / RenderTarget in Three.js

My application is coded in JavaScript + Three.js / WebGL + GLSL. I have 200 curves, each one made of 85 points. To animate the curves I add a new point and remove the last one.
So I made a positions shader that stores the new positions onto a texture (1), and a lines shader that writes the positions of all curves onto another texture (2).
The goal is to use textures as arrays: I know the first and last index of a line, so I need to convert those indices to uv coordinates.
I use FBOHelper to debug FBOs.
1) This 1D texture contains the new points for each curve (200 in total): positionTexture
2) And these are the 200 curves, with all their points, one after the other: linesTexture
The black parts are the BUG here. Those texels shouldn't be black.
How it works: at each frame the shader looks up the new point for each line in the positionTexture and updates the linesTexture accordingly, with a for loop like this:
#define LINES_COUNT 200.0
#define LINE_POINTS 85.0 // with 100.0 it works!!!

// Then in main()
vec2 uv = gl_FragCoord.xy / resolution.xy;
for (float i = 0.0; i < LINES_COUNT; i += 1.0) {
    float startIdx = i * LINE_POINTS;            // line start index
    float endIdx = startIdx + LINE_POINTS - 1.0; // line end index
    vec2 lastCell = getUVfromIndex(endIdx);      // last uv coordinate reserved for the current line
    if (match(lastCell, uv)) {
        pos = texture2D(positionTexture, vec2((i / LINES_COUNT) + minFloat, 0.0)).xyz;
    } else if (index >= startIdx && index < endIdx) { // index = this fragment's linear index
        pos = texture2D(lineTexture, getNextUV(uv)).xyz;
    }
}
This works, but it's slightly buggy when I have many lines (150+): likely a precision problem. I'm not sure if the functions I wrote to look up the textures are right. I wrote functions like getNextUV(uv) to get the value from the next index (converted to uv coordinates) and copy it to the previous one, or match(xy, uv) to know if the current fragment is the texel I want.
I thought I could simply use the classic formula:
index = uv.y * width + uv.x
But it's more complicated than that. For example match():
// Whether a point XY is within a UV coordinate
float size = 132.0; // width and height of texture
float unit = 1.0 / size;
float minFloat = unit / size;

bool match(vec2 point, vec2 uv) {
    vec2 p = point;
    float x = floor(p.x / unit) * unit;
    float y = floor(p.y / unit) * unit;
    return x <= uv.x && x + unit > uv.x && y <= uv.y && y + unit > uv.y;
}
Or getUVfromIndex():
vec2 getUVfromIndex(float index) {
    float row = floor(index / size);  // Example: 83.56 / 10 = 8
    float col = index - (row * size); // Example: 83.56 - (8 * 10) = 3.56
    col = col / size + minFloat;      // u = 0.357
    row = row / size + minFloat;      // v = 0.81
    return vec2(col, row);
}
Can someone explain what the most efficient way is to look up values in a texture, by getting a uv coordinate from an index value?
Texture coordinates go from the edges of pixels, not the centers, so your formula to compute a UV coordinate needs to be
u = (xPixelCoord + .5) / widthOfTextureInPixels;
v = (yPixelCoord + .5) / heightOfTextureInPixels;
So I'm guessing you want getUVfromIndex to be
uniform vec2 sizeOfTexture; // allow texture to be any size

vec2 getUVfromIndex(float index) {
    float widthOfTexture = sizeOfTexture.x;
    float col = mod(index, widthOfTexture);
    float row = floor(index / widthOfTexture);
    return (vec2(col, row) + .5) / sizeOfTexture;
}
Or, based on some other experience with math issues in shaders, you might need to fudge the index:
uniform vec2 sizeOfTexture; // allow texture to be any size

vec2 getUVfromIndex(float index) {
    float fudgedIndex = index + 0.1;
    float widthOfTexture = sizeOfTexture.x;
    float col = mod(fudgedIndex, widthOfTexture);
    float row = floor(fudgedIndex / widthOfTexture);
    return (vec2(col, row) + .5) / sizeOfTexture;
}
If you're in WebGL2 you can use texelFetch, which takes integer pixel coordinates, e.g. texelFetch(someTexture, ivec2(col, row), 0), to get a value from a texture directly.

Processing - creating circles from current pixels

I'm using Processing, and I'm trying to create a circle from the pixels I have on my display.
I managed to pull the pixels on screen and create a growing circle from them.
However, I'm looking for something much more sophisticated: I want to make it seem as if the pixels on the display are moving from their current location and forming a turning circle or something like this.
This is what I have for now:
int c = 0;
int radius = 30;
allPixels = removeBlackP();

void draw() {
    loadPixels();
    for (int alpha = 0; alpha < 360; alpha++)
    {
        float xf = 350 + radius*cos(alpha);
        float yf = 350 + radius*sin(alpha);
        int x = (int) xf;
        int y = (int) yf;
        if (radius > 200) { radius = 30; break; }
        if (c >= allPixels.length) { c = 0; }
        pixels[y*700 + x] = allPixels[c];
        updatePixels();
    }
    radius++;
    c++;
}
The function removeBlackP returns an array with all the pixels except for the black ones.
This code works for me. There is an issue that the circle coordinates are only ints, so it seems like some pixels inside the circle won't fill; I can live with that. I'm looking for something a bit more complex, like I explained.
Thanks!
Fill all pixels of the scanlines belonging to the circle. Using this approach, you will paint all places inside the circle. For every line calculate the start coordinate (the end one is symmetric). Pseudocode:
for y = center_y - radius; y <= center_y + radius; y++
    dy = y - center_y
    dx = Sqrt(radius * radius - dy * dy)
    for x = center_x - dx; x <= center_x + dx; x++
        fill a[y, x]
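A runnable C++ rendering of that pseudocode (the flat row-major buffer is a hypothetical stand-in for Processing's pixels[]):

#include <cmath>
#include <vector>

// Fill a solid circle by horizontal scanlines into a width*height buffer.
void fillCircle(std::vector<unsigned char> &a, int width, int height,
                int cx, int cy, int r, unsigned char color)
{
    for (int y = cy - r; y <= cy + r; y++)
    {
        if (y < 0 || y >= height) continue;
        int dy = y - cy;
        int dx = (int)std::sqrt((double)(r * r - dy * dy));
        for (int x = cx - dx; x <= cx + dx; x++)
        {
            if (x < 0 || x >= width) continue;
            a[y * width + x] = color;
        }
    }
}

int main()
{
    std::vector<unsigned char> frame(700 * 700, 0);
    fillCircle(frame, 700, 700, 350, 350, 200, 255);
    return 0;
}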
When you have found places for all pixels, you can make a correlation between the initial pixel places and the calculated ones and move them step by step.
For example, if the initial coordinates relative to the center point for the k-th pixel are (x0, y0) and the final coordinates are (x1, y1), and you want to make M steps, moving the pixel along a spiral, calculate the intermediate coordinates:
calc values once:
    r0 = Sqrt(x0*x0 + y0*y0) // Math.Hypot if available
    r1 = Sqrt(x1*x1 + y1*y1)
    fi0 = Math.Atan2(y0, x0)
    fi1 = Math.Atan2(y1, x1)
    if fi1 < fi0 then
        fi1 = fi1 + 2 * Pi

for i = 1; i <= M; i++
    x = (r0 + i / M * (r1 - r0)) * Cos(fi0 + i / M * (fi1 - fi0))
    y = (r0 + i / M * (r1 - r0)) * Sin(fi0 + i / M * (fi1 - fi0))
    shift by center coordinates
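The same stepping in runnable C++ form (start/end coordinates and the step count M are arbitrary example values):

#include <cmath>
#include <cstdio>

int main()
{
    const double pi = 3.141592653589793;
    double x0 = 100, y0 = 0;  // initial pixel position relative to the center
    double x1 = 0,  y1 = 50;  // final position relative to the center
    int M = 10;               // number of steps

    double r0 = std::hypot(x0, y0), r1 = std::hypot(x1, y1);
    double fi0 = std::atan2(y0, x0), fi1 = std::atan2(y1, x1);
    if (fi1 < fi0) fi1 += 2.0 * pi;

    for (int i = 1; i <= M; i++)
    {
        double t = (double)i / M;
        double x = (r0 + t * (r1 - r0)) * std::cos(fi0 + t * (fi1 - fi0));
        double y = (r0 + t * (r1 - r0)) * std::sin(fi0 + t * (fi1 - fi0));
        std::printf("step %2d: (%6.1f, %6.1f)\n", i, x, y); // shift by center before plotting
    }
    return 0;
}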
The way you go about drawing circles in Processing looks a little convoluted.
The simplest way is to use the ellipse() function, no pixels involved though:
If you do need to draw an ellipse and use pixels, you can make use of PGraphics, which is similar to using a separate buffer/"layer" to draw into using Processing drawing commands, but it also has pixels[] you can access.
Let's say you want to draw a low-res pixel circle; you can create a small PGraphics, disable smoothing, draw the circle, then render the circle at a higher resolution. The only catch is that these drawing commands must be placed within beginDraw()/endDraw() calls:
PGraphics buffer;

void setup(){
    //disable sketch's aliasing
    noSmooth();
    buffer = createGraphics(25,25);
    buffer.beginDraw();
    //disable buffer's aliasing
    buffer.noSmooth();
    buffer.noFill();
    buffer.stroke(255);
    buffer.endDraw();
}

void draw(){
    background(255);
    //draw small circle
    float circleSize = map(sin(frameCount * .01),-1.0,1.0,0.0,20.0);
    buffer.beginDraw();
    buffer.background(0);
    buffer.ellipse(buffer.width / 2,buffer.height / 2, circleSize,circleSize);
    buffer.endDraw();
    //render small circle at higher resolution (blocky - no aliasing)
    image(buffer,0,0,width,height);
}
If you want to manually draw a circle using pixels[], you are on the right track using the polar to cartesian conversion formula (x = cos(angle) * radius, y = sin(angle) * radius). Even though it's focusing on drawing a radial gradient, you can find an example of drawing a circle (a lot of them, actually) using pixels in this answer.

Sprite Kit Shader Uniforms Ignored

The shader that I'm using relies upon the positions of the tiles in my game. I haven't found anything on using attribute variables with SKShader objects, so I went with updating the uniform variables. But it would seem that the shader won't communicate with the variables, especially once their values have been updated and changed. I am trying to make a basic lighting effect, but I can't get anything out of the shader at all. Any help? My code for the shader and for the Objective-C classes is below.
Shader
uniform float midX, midY;
uniform float posX;
uniform float posY;
void main()
{
    vec4 temp = SKDefaultShading(); // get the default shading
    float lightRad = 200.0; // Light radius
    float dist = distance(vec2(posX, posY), vec2(midX, midY)); // location of the light on the screen
    vec4 color = vec4(1.0, 0.0, 0.0, (float)(dist / lightRad)); // creates an alpha gradient for the light (falloff)
    if (dist < lightRad) // only applies the light color if the distance from the light to the tile is smaller than the radius of the light
    {
        gl_FragColor = temp * color; // applies the color
    }
    else // otherwise, do nothing
    {
        gl_FragColor = temp;
    }
}
Code
- (void) loadShaders
{
    SKUniform* posX = [SKUniform uniformWithName:@"posX" float: 0.0f]; // adds the x position (with a placeholder value)
    SKUniform* posY = [SKUniform uniformWithName:@"posY" float: 0.0f]; // adds the y position (with a placeholder value)
    [_shader addUniform:posX];
    [_shader addUniform:posY];
}

-(void)update:(CFTimeInterval)currentTime
{
    for (int i = 0; i < _array.count; i++) // Loop through all tiles
    {
        float x = ((i % 100) - 13.5f) * 15.0f;      // Calculate x pos of the tile
        float y = ((1 - (i / 100)) + 6.5f) * 15.0f; // Calculate y pos of the tile
        SKUniform* uniX = [[_tMap getShader] uniformNamed:@"posX"]; // get the uniform with the name posX
        uniX.floatValue = x; // set the value of that uniform
        SKUniform* uniY = [[_tMap getShader] uniformNamed:@"posY"]; // get the uniform with the name posY
        uniY.floatValue = y; // set the value of that uniform
    }
}
I'm fairly new to Sprite Kit, and I'm also new to GLSL.
