Calculating and storing pixelated ellipse - matrix

I was wondering if it is possible to create a function (in any language) that takes a width and a height as input.
This function would then calculate the biggest ellipse that fits inside the given dimensions and store it in a matrix, as in these two examples:
In the left example, the width is 14 and height is 27, where the white part is the ellipse.
In the right example, the width is 38 and height is 21, where, once again, the white part is the ellipse.
Of course, the black and white parts can be seen as true/false values indicating whether each pixel is part of the ellipse or not.

Yes, it is possible. The process is called ellipse rasterization. Here are a few methods to do it:
Let our image have resolution xs,ys, so the center (x0,y0) and semi-axes a,b are:
x0 = xs/2
y0 = ys/2
a  = x0-1
b  = y0-1
Using the ellipse equation
So: 2 nested for loops plus an if condition deciding whether you are inside or outside the ellipse.
for (y = 0; y < ys; y++)
    for (x = 0; x < xs; x++)
        if (((x-x0)*(x-x0)/(a*a)) + ((y-y0)*(y-y0)/(b*b)) <= 1.0) pixel[y][x] = color_inside;
        else                                                      pixel[y][x] = color_outside;
You can optimize this quite a lot by pre-computing the parts of the equation only when they change: some are computed just once, others on each x iteration, and the rest on each y iteration. It is also better to multiply by a precomputed reciprocal instead of dividing. For example:
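A minimal sketch of that optimization (assuming the same xs,ys, x0,y0, a,b, pixel[y][x] and color variables as above; the reciprocal names ia2, ib2 are just illustrative):

double ia2 = 1.0 / double(a * a);   // 1/a^2, computed once
double ib2 = 1.0 / double(b * b);   // 1/b^2, computed once
for (int y = 0; y < ys; y++)
{
    double yy = double(y - y0) * double(y - y0) * ib2;   // per-row term, computed once per y
    for (int x = 0; x < xs; x++)
    {
        double xx = double(x - x0) * double(x - x0) * ia2;
        pixel[y][x] = (xx + yy <= 1.0) ? color_inside : color_outside;
    }
}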
Using the parametric ellipse equation
x(t) = x0 + a*cos(t)
y(t) = y0 + b*sin(t)
t = <0,2.0*M_PI> // for whole ellipse
So a single for loop creates the circumference coordinates of one quadrant, and then horizontal (or vertical) lines are filled inside/outside for the quadrant and its 3 mirrors. However, this approach needs a buffer to store the circumference points of one quadrant, as sketched below.
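A rough sketch of that idea, assuming the same xs,ys / x0,y0 / a,b values and pixel[y][x] layout as above (the buffer here stores one half-width per quadrant row instead of raw circumference points, which amounts to the same thing; the function name is just illustrative):

#include <math.h>
#include <stdlib.h>

void ellipse_fill_parametric(int **pixel, int xs, int ys, int color_inside, int color_outside)
{
    const double PI = 2.0 * acos(0.0);
    int x0 = xs / 2, y0 = ys / 2;
    int a  = x0 - 1, b  = y0 - 1;
    // background first
    for (int y = 0; y < ys; y++)
        for (int x = 0; x < xs; x++)
            pixel[y][x] = color_outside;
    // buffer: half-width of the ellipse span for every row of one quadrant
    int *w = (int *)calloc(b + 1, sizeof(int));
    int steps = 4 * (a + b);                       // dense enough to hit every row
    for (int s = 0; s <= steps; s++)
    {
        double t  = 0.5 * PI * (double)s / (double)steps;
        int    dx = (int)floor(a * cos(t) + 0.5);
        int    dy = (int)floor(b * sin(t) + 0.5);
        if (dx > w[dy]) w[dy] = dx;
    }
    // mirror the quadrant and fill horizontal lines
    for (int dy = 0; dy <= b; dy++)
        for (int dx = -w[dy]; dx <= w[dy]; dx++)
        {
            pixel[y0 + dy][x0 + dx] = color_inside;
            pixel[y0 - dy][x0 + dx] = color_inside;
        }
    free(w);
}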
Using Bresenham ellipse algorithm
Using any circle algorithm and stretching to an ellipse
So simply use a square area with the size of the smaller of the xs,ys resolutions, render a circle, and then stretch it back to xs,ys. If you do not stretch during rasterization, you might create artifacts. In such a case it is better to use the bigger resolution and stretch down, but that is slower, of course. For example:
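A small sketch of stretching during rasterization, under the same pixel[y][x] assumptions as above (each target pixel is mapped back into the square working area with nearest-neighbour sampling, so no separate circle buffer is needed; the function name is just illustrative):

void ellipse_fill_stretched(int **pixel, int xs, int ys, int color_inside, int color_outside)
{
    int s = (xs < ys) ? xs : ys;        // square working resolution
    int c = s / 2;                      // circle center inside the square
    int r = c - 1;                      // circle radius
    for (int y = 0; y < ys; y++)
        for (int x = 0; x < xs; x++)
        {
            // map the target pixel back into the square (nearest neighbour)
            int sx = (int)((long long)x * s / xs);
            int sy = (int)((long long)y * s / ys);
            int dx = sx - c, dy = sy - c;
            pixel[y][x] = (dx * dx + dy * dy <= r * r) ? color_inside : color_outside;
        }
}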

Drawing an ellipse and storing it in a matrix can be accomplished with two different methods: rasterization (the recommended way) or pixel-by-pixel rendering. According to @Spektre's comment, both methods could be called "rasterization", since they both render the ellipse to a raster image. Anyway, I'll explain how to use both methods in C++ to draw an ellipse and store it in your matrix.
Note: here I'll assume that the origin of your matrix, matrix[0][0], refers to the upper-left corner of the image. Points in the matrix are therefore described by x- and y-coordinate pairs, where x-coordinates increase to the right and y-coordinates increase from top to bottom.
Pixel-by-pixel ellipse rendering
With this method, you loop over all the pixels in your matrix to determine whether each pixel is inside or outside of the ellipse. If the pixel is inside, you make it white, otherwise, you make it black.
In the following example code, the isPointOnEllipse function determines the status of a point relative to the ellipse. It takes the coordinates of the point, coordinates of the center of the ellipse, and the lengths of semi-major and semi-minor axes as parameters. It then returns either one of the values PS_OUTSIDE, PS_ONPERIM, or PS_INSIDE, which indicate that the point lies outside of the ellipse, the point lies exactly on the ellipse's perimeter, or the point lies inside of the ellipse, respectively.
Obviously, if the point status is PS_ONPERIM, then the point is also part of the ellipse and must be made white, because the ellipse's outline must be colored in addition to its inner area.
You must call the ellipseInMatrixPBP function to draw an ellipse, passing it a pointer to your matrix along with the matrix's width and height. This function loops through every pixel in your matrix, calls isPointOnEllipse for each pixel to see whether it is inside or outside of the ellipse, and finally modifies the pixel accordingly.
#include <math.h>

// Indicates the point lies outside of the ellipse.
#define PS_OUTSIDE (0)
// Indicates the point lies exactly on the perimeter of the ellipse.
#define PS_ONPERIM (1)
// Indicates the point lies inside of the ellipse.
#define PS_INSIDE  (2)

short isPointOnEllipse(int cx, int cy, int rx, int ry, int x, int y)
{
    // Scale the horizontal distance into "ry units" so the ellipse test
    // reduces to a simple circle test of radius ry.
    double m = (x - cx) * ((double) ry) / ((double) rx);
    double n = y - cy;
    double h = sqrt(m * m + n * n);
    if (h == ry)
        return PS_ONPERIM;
    else if (h < ry)
        return PS_INSIDE;
    else
        return PS_OUTSIDE;
}
void ellipseInMatrixPBP(bool **matrix, int width, int height)
{
    // So the ellipse shall be stretched to the whole matrix
    // with a one-pixel margin.
    int cx = width / 2;
    int cy = height / 2;
    int rx = cx - 1;
    int ry = cy - 1;
    int x, y;
    short pointStatus;
    // Loop through all the pixels in the matrix.
    for (x = 0; x < width; x++)
    {
        for (y = 0; y < height; y++)
        {
            pointStatus = isPointOnEllipse(cx, cy, rx, ry, x, y);
            // If the current pixel is outside of the ellipse,
            // make it black (false).
            // Else if the pixel is inside of the ellipse or on its perimeter,
            // make it white (true).
            if (pointStatus == PS_OUTSIDE)
                matrix[x][y] = false;
            else
                matrix[x][y] = true;
        }
    }
}
Ellipse rasterization
If the pixel-by-pixel approach to rendering is too slow, use the rasterization method. Here you determine which pixels in the matrix the ellipse affects, and then you modify only those pixels (e.g. you turn them white). Unlike pixel-by-pixel rendering, rasterization does not have to pass through the pixels that lie outside of the ellipse, which is why this approach is so much faster.
To rasterize the ellipse, it is recommended that you use the so-called mid-point ellipse algorithm, which is an extended form of Bresenham's circle algorithm.
However, I've come up with an ellipse-drawing algorithm that can probably compete with Bresenham's (except for its performance), so I'll post the function that you want, written in C++.
The following code defines a function named ellipseInMatrix that draws an ellipse with a one-pixel stroke, but does not fill that ellipse. You need to pass this function a pointer to the matrix that you have already allocated and initialized to false values, plus the dimensions of the matrix as integers. Note that ellipseInMatrix internally calls the rasterizeEllipse function which performs the main rasterizing operation. Whenever this function finds a point of the ellipse, it sets the corresponding pixel in the matrix to true, which causes the pixel to turn white.
#include <math.h>

#define pi (2 * acos(0.0))
#define coord_nil (-1)

struct point
{
    int x;
    int y;
};

double getEllipsePerimeter(int rx, int ry)
{
    return pi * sqrt(2 * (rx * rx + ry * ry));
}

void getPointOnEllipse(int cx, int cy, int rx, int ry, double d, struct point *pp)
{
    double theta = d * sqrt(2.0 / (rx * rx + ry * ry));
    // double theta = 2 * pi * d / getEllipsePerimeter(rx, ry);
    pp->x = (int) floor(cx + cos(theta) * rx);
    pp->y = (int) floor(cy - sin(theta) * ry);
}

void rasterizeEllipse(bool **matrix, int cx, int cy, int rx, int ry)
{
    struct point currentPoint, midPoint;
    struct point previousPoint = {coord_nil, coord_nil};
    double perimeter = floor(getEllipsePerimeter(rx, ry));
    double i;
    // Loop over the perimeter of the ellipse to determine all points on the ellipse path.
    for (i = 0.0; i < perimeter; i++)
    {
        // Find the current point and determine its coordinates.
        getPointOnEllipse(cx, cy, rx, ry, i, &currentPoint);
        // Color the current point.
        matrix[currentPoint.x][currentPoint.y] = true;
        // Check if the previous point exists. Note that if the current
        // point is the first point (i = 0), then there will be no previous point.
        if (previousPoint.x != coord_nil)
        {
            // Now check if there is a gap between the current point and the previous
            // point. It's not OK to have gaps along the ellipse path!
            if (!((currentPoint.x - 1 <= previousPoint.x) && (previousPoint.x <= currentPoint.x + 1) &&
                  (currentPoint.y - 1 <= previousPoint.y) && (previousPoint.y <= currentPoint.y + 1)))
            {
                // Find the missing point by defining its offset as a fraction
                // between the current point offset and the previous point offset.
                getPointOnEllipse(cx, cy, rx, ry, i - 0.5, &midPoint);
                matrix[midPoint.x][midPoint.y] = true;
            }
        }
        previousPoint.x = currentPoint.x;
        previousPoint.y = currentPoint.y;
    }
}

void ellipseInMatrix(bool **matrix, int width, int height)
{
    // So the ellipse shall be stretched to the whole matrix
    // with a one-pixel margin.
    int cx = width / 2;
    int cy = height / 2;
    int rx = cx - 1;
    int ry = cy - 1;
    // Call the general-purpose ellipse rasterizing function.
    rasterizeEllipse(matrix, cx, cy, rx, ry);
}
If you need to fill the ellipse with white pixels like the examples that you provided, you can use the following code instead to rasterize a filled ellipse. Call the filledEllipseInMatrix function with a similar syntax to the previous function.
#include <math.h>

#define pi (2 * acos(0.0))
#define coord_nil (-1)

struct point
{
    int x;
    int y;
};

double getEllipsePerimeter(int rx, int ry)
{
    return pi * sqrt(2 * (rx * rx + ry * ry));
}

void getPointOnEllipse(int cx, int cy, int rx, int ry, double d, struct point *pp)
{
    double theta = d * sqrt(2.0 / (rx * rx + ry * ry));
    // double theta = 2 * pi * d / getEllipsePerimeter(rx, ry);
    pp->x = (int) floor(cx + cos(theta) * rx);
    pp->y = (int) floor(cy - sin(theta) * ry);
}

void fillBar(struct point seed, bool **matrix, int cx)
{
    int bx;
    if (seed.x > cx)
    {
        for (bx = seed.x; bx >= cx; bx--)
            matrix[bx][seed.y] = true;
    }
    else
    {
        for (bx = seed.x; bx <= cx; bx++)
            matrix[bx][seed.y] = true;
    }
}

void rasterizeFilledEllipse(bool **matrix, int cx, int cy, int rx, int ry)
{
    struct point currentPoint, midPoint;
    struct point previousPoint = {coord_nil, coord_nil};
    double perimeter = floor(getEllipsePerimeter(rx, ry));
    double i;
    // Loop over the perimeter of the ellipse to determine all points on the ellipse path.
    for (i = 0.0; i < perimeter; i++)
    {
        // Find the current point and determine its coordinates.
        getPointOnEllipse(cx, cy, rx, ry, i, &currentPoint);
        // Fill the bar (horizontal line) that leads from
        // the current point to the minor axis.
        fillBar(currentPoint, matrix, cx);
        // Check if the previous point exists. Note that if the current
        // point is the first point (i = 0), then there will be no previous point.
        if (previousPoint.x != coord_nil)
        {
            // Now check if there is a gap between the current point and the previous
            // point. It's not OK to have gaps along the ellipse path!
            if (!((currentPoint.x - 1 <= previousPoint.x) && (previousPoint.x <= currentPoint.x + 1) &&
                  (currentPoint.y - 1 <= previousPoint.y) && (previousPoint.y <= currentPoint.y + 1)))
            {
                // Find the missing point by defining its offset as a fraction
                // between the current point offset and the previous point offset.
                getPointOnEllipse(cx, cy, rx, ry, i - 0.5, &midPoint);
                fillBar(midPoint, matrix, cx);
            }
        }
        previousPoint.x = currentPoint.x;
        previousPoint.y = currentPoint.y;
    }
}

void filledEllipseInMatrix(bool **matrix, int width, int height)
{
    // So the ellipse shall be stretched to the whole matrix
    // with a one-pixel margin.
    int cx = width / 2;
    int cy = height / 2;
    int rx = cx - 1;
    int ry = cy - 1;
    // Call the general-purpose ellipse rasterizing function.
    rasterizeFilledEllipse(matrix, cx, cy, rx, ry);
}
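For completeness, here is a hypothetical usage sketch (not part of the original answer): it allocates a width x height matrix indexed matrix[x][y], as the functions above expect, rasterizes the filled ellipse, and prints it with '#' for true and '.' for false.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int width = 38, height = 21;
    // Allocate the matrix and initialize every pixel to false (black).
    bool **matrix = (bool **) malloc(width * sizeof(bool *));
    for (int x = 0; x < width; x++)
        matrix[x] = (bool *) calloc(height, sizeof(bool));
    // Rasterize the filled ellipse into the matrix.
    filledEllipseInMatrix(matrix, width, height);
    // Print the matrix row by row.
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
            putchar(matrix[x][y] ? '#' : '.');
        putchar('\n');
    }
    // Clean up.
    for (int x = 0; x < width; x++)
        free(matrix[x]);
    free(matrix);
    return 0;
}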

Related

Creating gyroid pattern in 2D image algorithm

I'm trying to fill an image with gyroid lines of a certain thickness at a certain spacing, but math is not my area. I was able to create a sine wave and shift it a bit in the X direction to make it look like a gyroid, but it's not the same.
The idea is to stack some images with the same resolution and replicate the gyroid into 2D images, so we still have XYZ, where Z can be 0.01mm to 0.1mm per layer.
What I've tried:
int sineHeight = 100;
int sineWidth = 100;
int spacing = 100;
int radius = 10;

for (int y1 = 0; y1 < mat.Height; y1 += sineHeight + spacing)
    for (int x = 0; x < mat.Width; x++)
    {
        // Simulating first image
        int y2 = (int)(Math.Sin((double)x / sineWidth) * sineHeight / 2.0 + sineHeight / 2.0 + radius);
        Circle(mat, new System.Drawing.Point(x, y1 + y2), radius, EmguExtensions.WhiteColor, -1, LineType.AntiAlias);

        // Simulating second image, shift by x to make it look a bit more like a gyroid
        y2 = (int)(Math.Sin((double)x / sineWidth + sineWidth) * sineHeight / 2.0 + sineHeight / 2.0 + radius);
        Circle(mat, new System.Drawing.Point(x, y1 + y2), radius, EmguExtensions.GreyColor, -1, LineType.AntiAlias);
    }
Resulting in: (white represents layer 1, grey layer 2)
Still, this looks nothing like a real gyroid. How can I adapt the formula to work in this space?
You have just a single ugly slice because I do not see any z in your code (which is consistent: the surface has horizontal and vertical sine waves like this every 0.5*pi in z).
To see the 3D surface you have to raycast z ...
I would expect some conditional testing of the gyroid equation's result at the actually iterated x,y,z against some small non-zero number, like if (result <= 1e-6), drawing only then, or computing the color from the result instead. This is ideal to do in GLSL.
In case you are not familiar with GLSL and shaders: the fragment shader is executed for each pixel (called a fragment) of the rendered quad, so you just put the code inside your nested x,y for loops and use your x,y instead of pos (you can ignore the vertex shader, it's not important).
You have 2 basic options to render this:
Blending the raycasted surface pixels together, creating an X-ray-like image. It can be combined with SSS techniques to get the impression of glass or a semi-transparent material. Here is a simple GLSL example of the blending:
Vertex:
#version 400 core
in vec2 position;
out vec2 pos;
void main(void)
{
    pos = position;
    gl_Position = vec4(position.xy, 0.0, 1.0);
}
Fragment:
#version 400 core
in vec2 pos;
out vec3 out_col;
void main(void)
{
    float n,x,y,z,dz,d,i,di;
    const float scale=2.0*3.1415926535897932384626433832795;
    n=100.0;                        // layers
    x=pos.x*scale;                  // x position of pixel
    y=pos.y*scale;                  // y position of pixel
    dz=2.0*scale/n;                 // z step
    di=1.0/n;                       // color increment
    i=0.0;                          // color intensity
    for (z=-scale;z<=scale;z+=dz)   // do all layers
    {
        d =sin(x)*cos(y);           // compute gyroid equation
        d+=sin(y)*cos(z);
        d+=sin(z)*cos(x);
        if (d<=1e-6) i+=di;         // if near surface add to color
    }
    out_col=vec3(1.0,1.0,1.0)*i;
}
Usage is simple: just render a 2D quad covering the screen, without any matrices, with corner pos points in the range <-1,+1>. Here is the result:
Another technique is to render the first hit with the surface, creating a mesh-like image. In order to see the details we need to add basic (double-sided) directional lighting, for which the surface normal is needed. The normal can be computed by simply partially differentiating the equation by x,y,z. As the surface is now opaque, we can stop on the first hit and also raycast just a single period in z, as anything after that is hidden anyway. Here is a simple example:
Fragment:
#version 400 core
in vec2 pos;     // input fragment (pixel) position <-1,+1>
out vec3 col;    // output fragment (pixel) RGB color <0,1>
void main(void)
{
    bool _discard=true;
    float N,x,y,z,dz,d,i;
    vec3 n,l;
    const float pi=3.1415926535897932384626433832795;
    const float scale =3.0*pi;   // 3.0 periods in x,y
    const float scalez=2.0*pi;   // 1.0 period in z
    N=200.0;                     // layers per z (quality)
    x=pos.x*scale;               // <-1,+1> -> [rad]
    y=pos.y*scale;               // <-1,+1> -> [rad]
    dz=2.0*scalez/N;             // z step
    l=vec3(0.0,0.0,1.0);         // light unit direction
    i=0.0;                       // starting color intensity
    n=vec3(0.0,0.0,1.0);         // starting normal only to get rid of warning
    for (z=0.0;z>=-scalez;z-=dz) // raycast z through all layers in view direction
    {
        // gyroid equation
        d =sin(x)*cos(y);        // compute gyroid equation
        d+=sin(y)*cos(z);
        d+=sin(z)*cos(x);
        // surface hit test
        if (d>1e-6) continue;    // skip if too far from surface
        _discard=false;          // remember that surface was hit
        // compute normal (gradient of the gyroid equation)
        n.x =+cos(x)*cos(y);     // partial derivative by x
        n.x+=-sin(z)*sin(x);
        n.y =-sin(x)*sin(y);     // partial derivative by y
        n.y+=+cos(y)*cos(z);
        n.z =-sin(y)*sin(z);     // partial derivative by z
        n.z+=+cos(z)*cos(x);
        break;                   // stop raycasting
    }
    // skip rendering if no hit with surface (hole)
    if (_discard) discard;
    // directional lighting
    n=normalize(n);
    i=abs(dot(l,n));
    // ambient + directional lighting
    i=0.3+(0.7*i);
    // output fragment (render pixel)
    gl_FragDepth=z;              // depth (optional)
    col=vec3(1.0,1.0,1.0)*i;     // color
}
I hope I did not make an error in the partial derivatives (the normal is simply the gradient of the gyroid equation). Here is the result:
[Edit1]
Based on your code, I see it like this (X-ray-like blending):
var mat = EmguExtensions.InitMat(new System.Drawing.Size(2000, 1080));
double zz, dz, d, i, di = 0;
double scalex = 2.0 * Math.PI / mat.Width;      // not const: depends on mat
double scaley = 2.0 * Math.PI / mat.Height;
const double scalez = 2.0 * Math.PI;
uint layerCount = 100;                          // layers
for (int y = 0; y < mat.Height; y++)
{
    double yy = y * scaley;                     // y position of pixel
    for (int x = 0; x < mat.Width; x++)
    {
        double xx = x * scalex;                 // x position of pixel
        dz = 2.0 * scalez / layerCount;         // z step
        di = 1.0 / layerCount;                  // color increment
        i = 0.0;                                // color intensity
        for (zz = -scalez; zz <= scalez; zz += dz)  // do all layers
        {
            d = Math.Sin(xx) * Math.Cos(yy);    // compute gyroid equation
            d += Math.Sin(yy) * Math.Cos(zz);
            d += Math.Sin(zz) * Math.Cos(xx);
            if (d > 1e-6) continue;
            i += di;                            // if near surface add to color
        }
        i *= 255.0;
        mat.SetByte(x, y, (byte)(i));
    }
}

Confusion about zFar and zNear plane offsets using glm::perspective

I have been using glm to help build a software rasterizer for self education. In my camera class I am using glm::lookat() to create my view matrix and glm::perspective() to create my perspective matrix.
I seem to be getting what I expect for my left, right, top and bottom clipping planes. However, I seem to be either doing something wrong for my near/far planes, or there is an error in my understanding. I have reached a point at which my "google-fu" has failed me.
Operating under the assumption that I am correctly extracting clip planes from my glm::perspective matrix, and using the general plane equation:
aX+bY+cZ+d = 0
I am getting strange d or "offset" values for my zNear and zFar planes.
It is my understanding that the d value is the amount by which I would be shifting/translating the point P0 of a plane along the normal vector.
They are 0.200200200 and -0.200200200 respectively. However, my normals are correctly oriented at +1.0f and -1.f along the z-axis, as expected for planes perpendicular to my z basis vector.
So when testing a point such as (0, 0, -5) in world space against these planes, it is transformed by my view matrix to:
(0, 0, 5.81181192)
so when testing it against these planes in a clip chain, said example vertex would be culled.
Here is the start of a camera class establishing the relevant matrices:
static constexpr glm::vec3 UPvec(0.f, 1.f, 0.f);
static constexpr auto zFar = 100.f;
static constexpr auto zNear = 0.1f;

Camera::Camera(glm::vec3 eye, glm::vec3 center, float fovY, float w, float h) :
    viewMatrix{ glm::lookAt(eye, center, UPvec) },
    perspectiveMatrix{ glm::perspective(glm::radians<float>(fovY), w/h, zNear, zFar) },
    frustumLeftPlane {setPlane(0, 1)},
    frustumRighPlane {setPlane(0, 0)},
    frustumBottomPlane {setPlane(1, 1)},
    frustumTopPlane {setPlane(1, 0)},
    frstumNearPlane {setPlane(2, 0)},
    frustumFarPlane {setPlane(2, 1)},
The frustum objects are based off the following struct:
struct Plane
{
    glm::vec4 normal;
    float offset;
};
I have extracted the 6 clipping planes from the perspective matrix as below:
Plane Camera::setPlane(const int& row, const bool& sign)
{
    float temp[4]{};
    Plane plane{};
    if (sign == 0)
    {
        for (int i = 0; i < 4; ++i)
        {
            temp[i] = perspectiveMatrix[i][3] + perspectiveMatrix[i][row];
        }
    }
    else
    {
        for (int i = 0; i < 4; ++i)
        {
            temp[i] = perspectiveMatrix[i][3] - perspectiveMatrix[i][row];
        }
    }
    plane.normal.x = temp[0];
    plane.normal.y = temp[1];
    plane.normal.z = temp[2];
    plane.normal.w = 0.f;
    plane.offset = temp[3];
    plane.normal = glm::normalize(plane.normal);
    return plane;
}
Any help would be appreciated, as now I am at a loss.
Many thanks.
The d parameter of a plane equation describes how much the plane is offset from the origin along the plane normal. This also takes into account the length of the normal.
One can't just normalize the normal without also adjusting the d parameter since normalizing changes the length of the normal. If you want to normalize a plane equation then you also have to apply the division step to the d coordinate:
float normalLength = sqrt(temp[0] * temp[0] + temp[1] * temp[1] + temp[2] * temp[2]);
plane.normal.x = temp[0] / normalLength;
plane.normal.y = temp[1] / normalLength;
plane.normal.z = temp[2] / normalLength;
plane.normal.w = 0.f;
plane.offset = temp[3] / normalLength;
Side note 1: Usually, one would store the offset of a plane equation in the w-coordinate of a vec4 instead of a separate variable. The reason is that the typical operation you perform with it is a point to plane distance check like dist = n * x - d (for a given point x, normal n, offset d, * is dot product), which can then be written as dist = [n, d] * [x, -1].
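A tiny sketch of that convention (a hypothetical helper, not taken from the question's code), assuming GLM and the dist = n * x - d form described above, with the plane stored as vec4(normal.xyz, offset):

#include <glm/glm.hpp>

// Signed point-to-plane distance as a single 4D dot product:
// dist = [n, d] * [x, -1] = dot(n, x) - d
float signedDistance(const glm::vec4 &plane, const glm::vec3 &point)
{
    return glm::dot(plane, glm::vec4(point, -1.0f));
}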
Side note 2: Most software and also hardware rasterizers perform clipping after the projection step, since it's cheaper and easier to implement.

Processing - creating circles from current pixels

I'm using Processing, and I'm trying to create a circle from the pixels I have on my display.
I managed to pull the pixels on screen and create a growing circle from them.
However, I'm looking for something much more sophisticated: I want to make it seem as if the pixels on the display are moving from their current location and forming a turning circle or something like that.
This is what i have for now:
int c = 0;
int radius = 30;
allPixels = removeBlackP();

void draw() {
    loadPixels();
    for (int alpha = 0; alpha < 360; alpha++)
    {
        float xf = 350 + radius*cos(alpha);
        float yf = 350 + radius*sin(alpha);
        int x = (int) xf;
        int y = (int) yf;
        if (radius > 200) { radius = 30; break; }
        if (c > allPixels.length) { c = 0; }
        pixels[y*700 + x] = allPixels[c];
        updatePixels();
    }
    radius++;
    c++;
}
The function removeBlackP returns an array with all the pixels except for the black ones.
This code works for me. There is an issue that the circle coordinates are only ints, so it seems like some pixels inside the circle won't fill; I can live with that. I'm looking for something a bit more complex, as I explained.
Thanks!
Fill all pixels of the scanlines belonging to the circle. Using this approach, you will paint all places inside the circle. For every line, calculate the start coordinate (the end one is symmetric). Pseudocode:
for y = center_y - radius; y <= center_y + radius; y++
    dy = y - center_y
    dx = Sqrt(radius * radius - dy * dy)
    for x = center_x - dx; x <= center_x + dx; x++
        fill a[y, x]
When you find places for all pixels, you can make a correspondence between the initial pixel positions and the calculated ones and move them step by step.
For example, if the initial coordinates relative to the center point for the k-th pixel are (x0, y0) and the final coordinates are (x1, y1), and you want to make M steps, moving the pixel along a spiral, calculate the intermediate coordinates:
calc values once:
    r0 = Sqrt(x0*x0 + y0*y0)   //Math.Hypot if available
    r1 = Sqrt(x1*x1 + y1*y1)
    fi0 = Math.Atan2(y0, x0)
    fi1 = Math.Atan2(y1, x1)
    if fi1 < fi0 then
        fi1 = fi1 + 2 * Pi
for i = 1; i <= M; i++
    x = (r0 + i / M * (r1 - r0)) * Cos(fi0 + i / M * (fi1 - fi0))
    y = (r0 + i / M * (r1 - r0)) * Sin(fi0 + i / M * (fi1 - fi0))
    shift by center coordinates
The way you go about drawing circles in Processing looks a little convoluted.
The simplest way is to use the ellipse() function, though no pixels are involved:
If you do need to draw an ellipse and use pixels, you can make use of PGraphics, which is similar to using a separate buffer/"layer" to draw into using Processing drawing commands, but it also has a pixels[] array you can access.
Let's say you want to draw a low-res pixel circle: you can create a small PGraphics, disable smoothing, draw the circle, then render it at a higher resolution. The only catch is that these drawing commands must be placed within beginDraw()/endDraw() calls:
PGraphics buffer;

void setup(){
    //disable sketch's aliasing
    noSmooth();
    buffer = createGraphics(25,25);
    buffer.beginDraw();
    //disable buffer's aliasing
    buffer.noSmooth();
    buffer.noFill();
    buffer.stroke(255);
    buffer.endDraw();
}

void draw(){
    background(255);
    //draw small circle
    float circleSize = map(sin(frameCount * .01),-1.0,1.0,0.0,20.0);
    buffer.beginDraw();
    buffer.background(0);
    buffer.ellipse(buffer.width / 2,buffer.height / 2, circleSize,circleSize);
    buffer.endDraw();
    //render small circle at higher resolution (blocky - no aliasing)
    image(buffer,0,0,width,height);
}
If you want to manually draw a circle using pixels[], you are on the right track using the polar-to-Cartesian conversion formula (x = cos(angle) * radius, y = sin(angle) * radius). Even though it focuses on drawing a radial gradient, you can find an example of drawing a circle (a lot of them, actually) using pixels in this answer.

Calculate ellipse size in relation to distance from center point

I want to achieve a slow fade in size on every collapse into itself. In other words, when the circle is at its biggest, the ellipses will be at their largest size, and conversely for the retraction. So far I am trying to achieve this effect by remapping cSize based on the distance to the center point, but somewhere along the way something is going wrong. At the moment I am getting a slow transition from small to large in ellipse size, but the inner ellipses are noticeably larger. I want an equal distribution of size among all ellipses in relation to center-point distance.
I've simplified the code down to 4 ellipses rather than an array of rows of ellipses in order to hopefully simplify this example. This is done in the for (int x = -50; x <= 50; x += 100) loop.
I've seen one or two examples that slightly do what I want, but they are more or less static. This example is kind of similar because the ellipse size gets smaller or larger in relation to the mouse position:
Distance2D
Here is an additional diagram of the grid of ellipses I am trying to create, In addition, I am trying to scale that "square grid" of ellipses by a center point.
Multiple ellipses + Scale by center
Any pointers?
float cSize;
float shrinkOrGrow;

void setup() {
    size(640, 640);
    noStroke();
    smooth();
    fill(255);
}

void draw() {
    background(#202020);
    translate(width/2, height/2);
    if (cSize > 10) {
        shrinkOrGrow = 0;
    } else if (cSize < 1 ) {
        shrinkOrGrow = 1;
    }
    if (shrinkOrGrow == 1) {
        cSize += .1;
    } else if (shrinkOrGrow == 0) {
        cSize -= .1;
    }
    for (int x = -50; x <= 50; x += 100) {
        for (int y = -50; y <= 50; y += 100) {
            float d = dist(x, y, 0, 0);
            float fromCenter = map(cSize, 0, d, 1, 10);
            pushMatrix();
            translate(x, y);
            rotate(radians(d + frameCount));
            ellipse(x, y, fromCenter, fromCenter);
            popMatrix();
        }
    }
}
The values you're passing into the map() function don't make a lot of sense to me:
float fromCenter = map(cSize, 0, d, 1, 100);
The cSize variable bounces from 1 to 10 independent of anything else. The d variable is the distance of each ellipse to the center of the circle, but that's going to be static for each one since you're using the rotate() function to "move" the circle, which never actually moves. That's based only on the frameCount variable, which you never use to calculate the size of your ellipses.
In other words, the position of the ellipses and their size are completely unrelated in your code.
You need to refactor your code so that the size is based on the distance. I see two main options for doing this:
Option 1: Right now you're moving the circles on screen using the translate() and rotate() functions. You could think of this as the camera moving, not the ellipses moving. So if you want to base the size of the ellipse on its distance from some point, you have to get the distance of the transformed point, not the original point.
Luckily, Processing gives you the screenX() and screenY() functions for figuring out where a point will be after you transform it.
Here's an example of how you might use it:
for (int x = -50; x <= 50; x += 100) {
    for (int y = -50; y <= 50; y += 100) {
        pushMatrix();
        //transform the point
        //in other words, move the camera
        translate(x, y);
        rotate(radians(frameCount));
        //get the position of the transformed point on the screen
        float screenX = screenX(x, y);
        float screenY = screenY(x, y);
        //get the distance of that position from the center
        float distanceFromCenter = dist(screenX, screenY, width/2, height/2);
        //use that distance to create a diameter
        float diameter = 141 - distanceFromCenter;
        //draw the ellipse using that diameter
        ellipse(x, y, diameter, diameter);
        popMatrix();
    }
}
Option 2: Stop using translate() and rotate(), and use the positions of the ellipses directly.
You might create a class that encapsulates everything you need to move and draw an ellipse. Then just create instances of that class and iterate over them. You'd need some basic trig to figure out the positions, but you could then use them directly.
Here's a little example of doing it that way:
ArrayList<RotatingEllipse> ellipses = new ArrayList<RotatingEllipse>();

void setup() {
    size(500, 500);
    ellipses.add(new RotatingEllipse(width*.25, height*.25));
    ellipses.add(new RotatingEllipse(width*.75, height*.25));
    ellipses.add(new RotatingEllipse(width*.75, height*.75));
    ellipses.add(new RotatingEllipse(width*.25, height*.75));
}

void draw() {
    background(0);
    for (RotatingEllipse e : ellipses) {
        e.stepAndDraw();
    }
}

void mouseClicked() {
    ellipses.add(new RotatingEllipse(mouseX, mouseY));
}

void mouseDragged() {
    ellipses.add(new RotatingEllipse(mouseX, mouseY));
}

class RotatingEllipse {
    float rotateAroundX;
    float rotateAroundY;
    float distanceFromRotatingPoint;
    float angle;

    public RotatingEllipse(float startX, float startY) {
        rotateAroundX = (width/2 + startX)/2;
        rotateAroundY = (height/2 + startY)/2;
        distanceFromRotatingPoint = dist(startX, startY, rotateAroundX, rotateAroundY);
        angle = atan2(startY-height/2, startX-width/2);
    }

    public void stepAndDraw() {
        angle += PI/64;
        float x = rotateAroundX + cos(angle)*distanceFromRotatingPoint;
        float y = rotateAroundY + sin(angle)*distanceFromRotatingPoint;
        float distance = dist(x, y, width/2, height/2);
        float diameter = 50*(500-distance)/500;
        ellipse(x, y, diameter, diameter);
    }
}
Try clicking or dragging in this example. User interaction makes more sense to me using this approach, but which option you choose really depends on what fits inside your head the best.

Draw a sphere using 3D pixels (voxels)

Can you suggest an algorithm that can draw a sphere in 3D space using only the basic plot(x,y,z) primitive (which would draw a single voxel)?
I was hoping for something similar to Bresenham's circle algorithm, but for 3D instead of 2D.
FYI, I'm working on a hardware project that is a low-res 3D display using a 3-dimensional matrix of LEDs, so I need to actually draw a sphere, not just a 2D projection (i.e. circle).
The project is very similar to this:
... or see it in action here.
One possibility I have in mind is this:
calculate the Y coordinates of the poles (given the radius) (for a sphere centered in the origin, these would be -r and +r)
slice the sphere: for each horizontal plane pi between these coordinates, calculate the radius of the circle obtained by intersecting said plane with the sphere => ri.
draw the actual circle of radius ri on plane pi using Bresenham's algorithm.
FWIW, I'm using a .NET micro-framework microprocessor, so programming is C#, but I don't need answers to be in C#.
The simple, brute force method is to loop over every voxel in the grid and calculate its distance from the sphere center. Then color the voxel if its distance is less than the sphere radius. You can save a lot of instructions by eliminating the square root and comparing the dot product to the radius squared.
Pretty far from optimal, sure. But on an 8x8x8 grid as shown, you'll need to do this operation 512 times per sphere. If the sphere center is on the grid, and its radius is an integer, you only need integer math. The dot product is 3 multiplies and 2 adds. Multiplies are slow; let's say they take 4 instructions each. Plus you need a comparison. Add in the loads and stores, let's say it costs 20 instructions per voxel. That's 10240 instructions per sphere.
An Arduino running at 16 MHz could push 1562 spheres per second. Unless you're doing tons of other math and I/O, this algorithm should be good enough.
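A minimal C++ sketch of that brute-force test (the 8x8x8 grid size and the plot() voxel primitive are assumptions taken from the question; integer math only, with the radius compared squared to avoid the square root):

const int GRID = 8;                      // assumed LED cube resolution

void plot(int x, int y, int z);          // assumed voxel primitive from the question

void drawSphereBruteForce(int cx, int cy, int cz, int r)
{
    int r2 = r * r;                      // compare squared distances, no sqrt needed
    for (int z = 0; z < GRID; z++)
        for (int y = 0; y < GRID; y++)
            for (int x = 0; x < GRID; x++)
            {
                int dx = x - cx, dy = y - cy, dz = z - cz;
                if (dx * dx + dy * dy + dz * dz <= r2)
                    plot(x, y, z);       // voxel lies inside the sphere
            }
}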
I don't believe running the midpoint circle algorithm on each layer will give the desired results once you reach the poles, as you will have gaps in the surface where LEDs are not lit. This may give the result you want, however, so that would be up to aesthetics. This post is based on using the midpoint circle algorithm to determine the radius of the layers through the middle two vertical octants, and then when drawing each of those circles also setting the points for the polar octants.
I think that, based on @Nick Udall's comment and answer here, using the circle algorithm to determine the radius of your horizontal slice will work with a modification I proposed in a comment on his answer. The circle algorithm should be modified to take an initial error as an input, and to also draw the additional points for the polar octants.
Draw the standard circle algorithm points at y0 + y1 and y0 - y1: (x0 +/- x, z0 +/- z, y0 +/- y1) and (x0 +/- z, z0 +/- x, y0 +/- y1), 16 points in total. This forms the bulk of the vertical of the sphere.
Additionally draw the points (x0 +/- y1, z0 +/- x, y0 +/- z) and (x0 +/- x, z0 +/- y1, y0 +/- z), another 16 points, which will form the polar caps of the sphere.
By passing the outer algorithm's error into the circle algorithm, it will allow for sub-voxel adjustment of each layer's circle. Without passing the error into the inner algorithm, the equator of the circle will be approximated to a cylinder, and each approximated sphere face on the x, y, and z axes will form a square. With the error included, each face given a large enough radius will be approximated as a filled circle.
The following code is modified from Wikipedia's midpoint circle algorithm. The DrawCircle algorithm has its nomenclature changed to operate in the xz-plane, with the addition of the third initial point y0, the y offset y1, and the initial error error0. DrawSphere was modified from the same function to take the third initial point y0, and it calls DrawCircle rather than DrawPixel.
public static void DrawCircle(int x0, int y0, int z0, int y1, int radius, int error0)
{
    int x = radius, z = 0;
    int radiusError = error0; // Initial error state passed in, NOT 1-x
    while (x >= z)
    {
        // draw the 32 points here.
        z++;
        if (radiusError < 0)
        {
            radiusError += 2 * z + 1;
        }
        else
        {
            x--;
            radiusError += 2 * (z - x + 1);
        }
    }
}
public static void DrawSphere(int x0, int y0, int z0, int radius)
{
    int x = radius, y = 0;
    int radiusError = 1 - x;
    while (x >= y)
    {
        // pass in base point (x0,y0,z0), this algorithm's y as y1,
        // this algorithm's x as the radius, and pass along radius error.
        DrawCircle(x0, y0, z0, y, x, radiusError);
        y++;
        if (radiusError < 0)
        {
            radiusError += 2 * y + 1;
        }
        else
        {
            x--;
            radiusError += 2 * (y - x + 1);
        }
    }
}
For a sphere of radius 4 (which actually requires 9x9x9), this would run three iterations of the DrawCircle routine, with the first drawing a typical radius 4 circle (three steps), the second drawing a radius 4 circle with initial error of 0 (also three steps), and then the third drawing a radius 3 circle with initial error 0 (also three steps). That ends up being nine calculated points, drawing 32 pixels each.
That makes 32 (points per circle) x 3 (add or subtract operations per point) + 6 (add, subtract, shift operations per iteration) = 102 add, subtract, or shift operations per calculated point. In this example, that's 3 points for each circle = 306 operations per layer. The radius algorithm also adds 6 operations per layer and iterates 3 times, so (306 + 6) * 3 = 936 basic arithmetic operations for the example radius of 4.
The cost here is that you will repeatedly set some pixels without additional condition checks (i.e. x = 0, y = 0, or z = 0), so if your I/O is slow you may be better off adding the condition checks. Assuming all LEDs were cleared at the start, the example circle would set 288 LEDs, while there are many fewer LEDs that would actually be lit due to repeat sets.
It looks like this would perform better than the bruteforce method for all spheres that would fit in the 8x8x8 grid, but the bruteforce method would have consistent timing regardless of radius, while this method will slow down when drawing large radius spheres where only part will be displayed. As the display cube increases in resolution, however, this algorithm timing will stay consistent while bruteforce will increase.
Assuming that you already have a plot function like you said:
public static void DrawSphere(double r, int lats, int longs)
{
    int i, j;
    for (i = 0; i <= lats; i++)
    {
        double lat0 = Math.PI * (-0.5 + (double)(i - 1) / lats);
        double z0 = Math.Sin(lat0) * r;
        double zr0 = Math.Cos(lat0) * r;
        double lat1 = Math.PI * (-0.5 + (double)i / lats);
        double z1 = Math.Sin(lat1) * r;
        double zr1 = Math.Cos(lat1) * r;
        for (j = 0; j <= longs; j++)
        {
            double lng = 2 * Math.PI * (double)(j - 1) / longs;
            double x = Math.Cos(lng);
            double y = Math.Sin(lng);
            plot(x * zr0, y * zr0, z0);
            plot(x * zr1, y * zr1, z1);
        }
    }
}
That function should plot a sphere at the origin with specified latitude and longitude resolution (judging by your cube you probably want something around 40 or 50 as a rough guess). This algorithm doesn't "fill" the sphere though, so it will only provide an outline, but playing with the radius should let you fill the interior, probably with decreasing resolution of the lats and longs along the way.
Just found an old Q&A about generating a sphere mesh; the top answer actually gives you a short piece of pseudo-code to generate your X, Y and Z:
(x, y, z) = (sin(Pi * m/M) cos(2Pi * n/N), sin(Pi * m/M) sin(2Pi * n/N), cos(Pi * m/M))
Check this Q&A for details :)
procedurally generate a sphere mesh
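A quick sketch of plotting that formula onto the voxel grid (a hypothetical function, not from the linked answer; M and N are the latitude/longitude step counts, plot() is the question's primitive, and the unit-sphere coordinates are scaled by the radius and rounded):

#include <math.h>

void plot(int x, int y, int z);    // assumed voxel primitive from the question

void drawSphereFromFormula(int cx, int cy, int cz, double r, int M, int N)
{
    const double PI = 2.0 * acos(0.0);
    for (int m = 0; m <= M; m++)
        for (int n = 0; n < N; n++)
        {
            // unit-sphere point from the formula above
            double x = sin(PI * m / M) * cos(2.0 * PI * n / N);
            double y = sin(PI * m / M) * sin(2.0 * PI * n / N);
            double z = cos(PI * m / M);
            // scale by the radius, round to the grid, and offset by the center
            plot(cx + (int)floor(r * x + 0.5),
                 cy + (int)floor(r * y + 0.5),
                 cz + (int)floor(r * z + 0.5));
        }
}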
My solution uses floating point math instead of integer math; not ideal, but it works.
private static void DrawSphere(float radius, int posX, int posY, int posZ)
{
    // determines how far apart the pixels are
    float density = 1;
    for (float i = 0; i < 90; i += density)
    {
        float x1 = radius * (float)Math.Cos(i * Math.PI / 180);
        float y1 = radius * (float)Math.Sin(i * Math.PI / 180);
        for (float j = 0; j < 45; j += density)
        {
            float x2 = x1 * (float)Math.Cos(j * Math.PI / 180);
            float y2 = x1 * (float)Math.Sin(j * Math.PI / 180);
            int x = (int)Math.Round(x2) + posX;
            int y = (int)Math.Round(y1) + posY;
            int z = (int)Math.Round(y2) + posZ;
            DrawPixel(x, y, z);
            DrawPixel(x, y, -z);
            DrawPixel(-x, y, z);
            DrawPixel(-x, y, -z);
            DrawPixel(z, y, x);
            DrawPixel(z, y, -x);
            DrawPixel(-z, y, x);
            DrawPixel(-z, y, -x);
            DrawPixel(x, -y, z);
            DrawPixel(x, -y, -z);
            DrawPixel(-x, -y, z);
            DrawPixel(-x, -y, -z);
            DrawPixel(z, -y, x);
            DrawPixel(z, -y, -x);
            DrawPixel(-z, -y, x);
            DrawPixel(-z, -y, -x);
        }
    }
}
