Drawing splines in varying color segments - winapi

I'm using GDI+ to draw a cardinal spline through a series of points. I wish to draw certain segments in a different color. I've tried drawing the segments individually with multiple Graphics::DrawCurve calls, but this short-circuits the smoothing algorithm and results in the connections between segments being sharp points rather than curves. I thought about a gradient brush, but I don't want a smooth color gradient; I want an abrupt color change at certain points. Is this possible without duplicating the spline algorithm and manually drawing point-by-point?

Related

Is there any algorithm that can automatically detect the rows and columns of sprite sheets?

Is there any algorithm that can automatically detect the rows and columns of a sprite sheet like the one above? This would make it easier to know the size and position of each cell for displaying the animation.
Yes! This is an ideal use case for the Fourier transform. If you squish the image down to 1 pixel height, take just the alpha channel, and do a Fourier transform on it, you will see a frequency peak at a wavelength corresponding to the number of pixels per sprite. You can do the same thing vertically. This will be robust even with closely packed sprites, as long as they have some amount of outline to extract.
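A rough sketch of this idea in Python with numpy's FFT (the file name and the RGBA layout are placeholders, not from the question):

    import numpy as np
    from PIL import Image

    # Load the sheet with an alpha channel; "spritesheet.png" is a placeholder name.
    sheet = np.asarray(Image.open("spritesheet.png").convert("RGBA"), dtype=float)

    def sprite_period(profile_1d):
        """Estimate the repeat length (pixels per sprite) of a 1-D alpha profile."""
        profile = profile_1d - profile_1d.mean()      # remove the DC component
        spectrum = np.abs(np.fft.rfft(profile))
        spectrum[0] = 0                               # ignore the zero-frequency bin
        k = int(np.argmax(spectrum))                  # dominant frequency bin
        return len(profile_1d) / k                    # wavelength in pixels

    alpha = sheet[..., 3]
    per_col = sprite_period(alpha.sum(axis=0))        # "squish" to 1 pixel height
    per_row = sprite_period(alpha.sum(axis=1))        # same thing vertically
    print(f"~{per_col:.1f} px per sprite horizontally, ~{per_row:.1f} px vertically")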
If the sprites have irregular width/height but are separated by transparent regions and sit in distinct bounding boxes, you can instead pick a random non-transparent pixel and greedily grow a bounding box outward from it until the bounding box has only transparent pixels along its outline, then mark those pixels as “visited” and do the same from an unvisited pixel until you've visited all non-transparent pixels.
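If the sprites really are separated by fully transparent gaps, the box-growing step can also be expressed as connected-component labeling, which does the same bookkeeping. A sketch using scipy (not the exact greedy growth described above):

    import numpy as np
    from scipy import ndimage

    def sprite_boxes(alpha, threshold=0):
        """Return (top, bottom, left, right) boxes around groups of opaque pixels."""
        mask = alpha > threshold
        labels, count = ndimage.label(mask)          # 4-connected components by default
        boxes = []
        for sl in ndimage.find_objects(labels):      # one (row-slice, col-slice) per component
            ys, xs = sl
            boxes.append((ys.start, ys.stop, xs.start, xs.stop))
        return boxes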

Painting stroke generation algorithm for robot arm

I am writing code that generates the start and end points of the strokes of a picture (raster image) so that a robot arm can paint it.
I have written an algorithm, but it produces too many overlapping strokes:
https://github.com/Evrid/Painting-stroke-generation-for-robot-arm-or-CNC-machine
The input of my algorithm:
and the output (which is mirrored and re-assigned to the colors I have) with ThresholdOfError = 50 (you can see the strokes overlapping):
Things to notice are:
*The strokes must not overlap (if they overlap, there are too many strokes)
*The painting has different colors, and strokes of the same color are better drawn together
*The strokes are shaped like rectangles
*Some colored areas are disconnected, like the image below showing only the yellow of a sunflower:
I am not sure which algorithm I should use; here are some possibilities I have thought about:
Method 1. Generate 50k (or more) large rectangles with random direction and position; if a rectangle's area overlaps the same color area and does not overlap other rectangles, keep it. Then decrease the generated rectangle size and, after a couple of rounds, keep decreasing again.
Method 2. Extract a certain color first, then generate large rectangles with random direction and position (less area to cover and less calculation time).
Method 3. Do edge detection first, then generate rectangles with their direction along the edges; if a rectangle's area overlaps the same color area and does not overlap other rectangles, keep it. Then decrease the generated rectangle size and, after a couple of rounds, keep decreasing again.
Method 4. Generate random circles and let the pen draw points instead (but this may result in too many points).
Any suggestions about which algorithm I should use?
I would start with:
Quantize your image to your palette
so reduce the colors to your palette first; see:
Effective gif/image color quantization?
Converting BMP image to set of instructions for a plotter?
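A minimal sketch of the quantization step in Python with numpy, assuming "palette" is a small list of the paint colors you actually have (RGB triples):

    import numpy as np

    def quantize_to_palette(image, palette):
        """Map every pixel of an (H, W, 3) image to its closest palette color."""
        img = image.reshape(-1, 1, 3).astype(float)        # (N, 1, 3)
        pal = np.asarray(palette, dtype=float)[None]       # (1, P, 3)
        dist = np.sum((img - pal) ** 2, axis=-1)           # squared RGB distance per pixel/color
        nearest = np.argmin(dist, axis=1)                  # palette index for each pixel
        quantized = pal[0][nearest].reshape(image.shape).astype(np.uint8)
        index_map = nearest.reshape(image.shape[:2])       # per-pixel palette index
        return quantized, index_map

A perceptual color space or dithering would give nicer results, but plain nearest-RGB is enough to feed the segmentation step below.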
segment your image by similar colors
for this you can use flood fill or region growing to create labels (region indices) in the form of ROIs
see Fracture detection in hand using image proccessing
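Continuing the sketch, the labeling below is the flood-fill equivalent: it takes the per-pixel palette indices from the quantization sketch above and splits each color into connected regions (ROIs):

    import numpy as np
    from scipy import ndimage

    def color_rois(index_map, palette_count):
        """Yield (palette_index, boolean ROI mask) for every connected region."""
        for color in range(palette_count):
            labels, count = ndimage.label(index_map == color)   # flood-fill style labeling
            for region in range(1, count + 1):
                yield color, labels == region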
for each ROI create an infill path with a thick brush
this is simple hatching: generate a zig-zag-like path with a "big" brush width along the major direction of the ROI, so use either AABB, OBB, or PCA to detect the major direction (the direction with the biggest extent of the ROI) and just AND the path with the ROI polygon
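A rough sketch of this hatching step, using PCA for the major direction; the brush spacing and the (start, end) stroke format are assumptions, and each hatch band is reduced to a single run, which ignores holes in the ROI:

    import numpy as np

    def hatch_roi(mask, brush_width):
        """Return a list of (start_xy, end_xy) strokes covering a boolean ROI mask."""
        ys, xs = np.nonzero(mask)
        pts = np.column_stack([xs, ys]).astype(float)
        mean = pts.mean(axis=0)
        # PCA: the eigenvector with the largest eigenvalue is the major direction
        evals, evecs = np.linalg.eigh(np.cov((pts - mean).T))
        major, minor = evecs[:, 1], evecs[:, 0]
        u = (pts - mean) @ major                      # coordinate along the stroke direction
        v = (pts - mean) @ minor                      # coordinate across the stroke direction
        strokes = []
        for band in np.arange(v.min(), v.max() + brush_width, brush_width):
            in_band = (v >= band) & (v < band + brush_width)
            if not np.any(in_band):
                continue
            mid = band + brush_width / 2.0
            a, b = u[in_band].min(), u[in_band].max() # crude: one run per band
            strokes.append((mean + a * major + mid * minor,
                            mean + b * major + mid * minor))
        return strokes

Reversing every other stroke turns the list into the zig-zag path described above.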
for each ROI create an outline path with a "thin" brush
IIRC this is also called contour extraction: simply select the boundary pixels of the selected ROI
then you can use A* on the ROI boundary to sort the pixels into 2 halves (or more if the shape is complex, with holes or thin parts), so backtrack the pixels and then reorder them to form a closed loop (or loops)
this will preserve the details on the boundary (while the infill uses a thick brush)
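A sketch of the outline step: the boundary is the ROI minus its erosion, and the ordering here is a greedy nearest-neighbour walk rather than the A* approach described above, so it can misbehave on shapes with holes or very thin parts:

    import numpy as np
    from scipy import ndimage

    def roi_outline(mask):
        """Return ROI boundary pixels ordered into a rough closed loop."""
        boundary = mask & ~ndimage.binary_erosion(mask)
        pts = list(zip(*np.nonzero(boundary)))        # (y, x) boundary pixels
        path = [pts.pop(0)]
        while pts:
            cy, cx = path[-1]
            nearest = min(pts, key=lambda p: (p[0] - cy) ** 2 + (p[1] - cx) ** 2)
            path.append(nearest)
            pts.remove(nearest)
        return path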
Something like this:
If your colors are combinable, you can use the CMY color space and subtractive color mixing and process each C, M, Y channel separately (at most 3 overlapping strokes) to get a much better color match.
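The conversion itself is trivial once RGB is scaled to 0..1; a sketch:

    import numpy as np

    def rgb_to_cmy(image_rgb8):
        """Split an 8-bit RGB image into C, M, Y channels in 0..1 (subtractive mixing)."""
        rgb = image_rgb8.astype(float) / 255.0
        cmy = 1.0 - rgb
        return cmy[..., 0], cmy[..., 1], cmy[..., 2]

Each channel can then be segmented and painted as its own layer.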
If you want much better colors you can also add dithering; however, that will slow down the painting a lot, as it requires many more path segments, and it is not optimal for a plotter with tool up/down movement (dithering is better suited to print heads, or to printing triggered without additional movement). To partially overcome this issue you could use partial dithering, where you specify the amount of dithering created (leading to fewer segments).
there are a lot of things you can improve/add to this like:
remove the outline from the ROI (to limit overlaps and prevent overpainting of details)
do all infills first and then all outlines
set infill brush width based on ROI size
adjust infill hatching pattern to better match your arm kinematics
order the ROIs so they are painted faster (a variation of the Traveling Salesman Problem, TSP)
infill with more than just one brush width to preserve details near borders
I suggest you use the flood fill algorithm.
Start at the top-right pixel.
Flood fill that pixel color. https://en.wikipedia.org/wiki/Flood_fill
Fit rectangles into the filled area.
Move on to the next pixel that is not in the filled area.
When the entire picture has been covered, sort the rectangles by color.
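A sketch of this pipeline in Python, assuming you already have one boolean mask per fill color; "fitting rectangles" is interpreted here as taking horizontal runs of columns that are filled across a band of rows, which is only one simple way to do it:

    import numpy as np

    def mask_to_rects(mask, color, brush_height=4):
        """Decompose a boolean mask into (color, top, left, bottom, right) rectangles."""
        rects = []
        h, w = mask.shape
        for top in range(0, h, brush_height):
            # keep only columns filled across the whole band, so rectangles stay inside the area
            band = mask[top:top + brush_height].all(axis=0)
            x = 0
            while x < w:
                if band[x]:
                    start = x
                    while x < w and band[x]:
                        x += 1
                    rects.append((color, top, start, min(top + brush_height, h), x))
                else:
                    x += 1
        return rects

    # all_rects = [r for color, m in color_masks for r in mask_to_rects(m, color)]
    # all_rects.sort(key=lambda r: r[0])    # "sort the rectangles by color"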

Matlab Image Segmentation and Circle Identification

I am working on image segmentation, edge detection, and opening and closing by reconstruction in Matlab. I am trying to identify circular objects in a very noisy image with the aim of creating a mask with the edges of these circular objects and then superimposing such mask in the original image. After applying opening and closing by reconstruction along with the watershed function to identify objects' boundaries and a binary mask of the original image, I am able to get edges corresponding to full and half circles. Although the full circles identified are very few and I mostly get half circles, this method filters out most of the noise from the image.
Trying to get the edges of full circles, I used the canny function for edge detection. This function gets the complete edges of the majority of the circular objects, but it also draws the edges of the noise in the image. This doesn't allow me to create a good mask to superimpose in the original image.
The question, then, is whether there is any efficient method to get rid of the noise picked up by the canny function, or whether it is possible to do canny edge detection only on objects of a certain radius, as the circular objects that I want to identify have a specific radius. Attached is the original image; what causes the noise in the image are the dark vertical bands or shadows and the bright beams of light on top of the circles. P.S. The matlab function "imfindcircles" for circle detection does not work on my image because of the broken circular edges or the background noise.
Original image of circular objects and dark vertical lines and bright spots as noise
You can pre-process the given image before applying Hough transform. The problem you are getting is because of uneven distribution of brightness across the image. You can apply some filtering techniques like homomorphic filtering before edge detection and Hough transform. Homomorphic filtering technique normalizes the brightness across an image and increases contrast. Once you apply canny edge detection on this image, you can use some edge linking algorithm to fill the gaps between detected edges to get better performance using Hough transform.
The process goes like this,
image --> homomorphic filtering --> canny edge detection --> edge linking --> Hough transform
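The answer is about Matlab, but for reference here is a hedged sketch of the same pipeline in Python with numpy/OpenCV; the Gaussian-based homomorphic filter and the morphological closing used as a crude "edge linking" stand-in are simplifications, and every parameter value is a guess that would need tuning on the actual image:

    import cv2
    import numpy as np

    def homomorphic(gray, sigma=30, gamma_low=0.5, gamma_high=1.5):
        """Suppress slow illumination changes (shadows, beams) and boost local contrast."""
        log_img = np.log1p(gray.astype(np.float32))
        illumination = cv2.GaussianBlur(log_img, (0, 0), sigma)   # low-frequency part
        filtered = gamma_low * illumination + gamma_high * (log_img - illumination)
        out = np.expm1(filtered)
        return cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    gray = cv2.imread("circles.png", cv2.IMREAD_GRAYSCALE)        # placeholder file name
    flat = homomorphic(gray)
    edges = cv2.Canny(flat, 50, 150)
    linked = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    # OpenCV's HOUGH_GRADIENT runs its own internal Canny, so it is usually fed the
    # filtered grayscale image rather than the edge map itself:
    circles = cv2.HoughCircles(flat, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                               param1=150, param2=30, minRadius=20, maxRadius=40)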

OpenGL ES GL_TRIANGLES gradient issue

I am trying to draw a area graph with a gradient. This is what I have right now.
If you look at the red-green graph, you will notice the gradient does not look the way it's supposed to.
EDIT: The gradient should be uniform like this:
I am using OpenGL ES 2.0 and GLKit to draw a bunch of charts. The chart is drawn using GL_TRIANGLES. I understand that the issue is that the gradient is being drawn for each triangle individually.
The only approach I can think of is to use a stencil buffer. I will draw the gradient in a big rectangle and clip it to this shape using the stencil. Is there a better way to do this? If not could you help me draw a stencil with specified points? I am new to OpenGL and not getting a good explanation on using stencil buffer.
You don't need a stencil buffer. I don't think more triangles will help, either — more likely that'd just cause you more confusion because you'd be assigning per-vertex colors to intermediate vertices and having to interpolate them yourself.
Your gradients are coming out that way because of how and where you assign vertex colors for interpolation. Notice the difference in colors between your output and the example of what you're looking for:
You've got 100% red at every vertex along the top edge of your graph, and 100% green at every vertex along the bottom edge. OpenGL interpolates colors linearly across the face of each triangle, which is why you've got more red in the shorter parts of your graph.
In the output you're looking for, the top of the graph starts out less red in the shorter parts, so that it makes the transition to white over a shorter distance.
There are a few different ways to do this, but probably the easiest (for your plan of using GLKBaseEffect instead of writing your own shaders) might be to use a 1D texture for your gradient, and assign a texture coordinate to each vertex that's proportional to its Y coordinate on the graph, like so:
(The example coordinates in my diagram assume your graph vertices cover the range 0.0 to 1.0, but the point stands regardless: the vertical texture coordinate for each point should be a fraction of the graph's total height, between 0.0 and 1.0.)
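The texture-coordinate arithmetic itself is just a height fraction; written out in Python for clarity (the vertex layout is an assumption):

    def gradient_tex_coords(vertices):
        """vertices: list of (x, y); returns one texture coordinate t in [0, 1] per vertex."""
        ys = [y for _, y in vertices]
        y_min, y_max = min(ys), max(ys)
        return [(y - y_min) / (y_max - y_min) for _, y in vertices]

Sampling a 1-D gradient texture (or a 1-pixel-wide 2-D one) with these coordinates gives a uniform vertical gradient regardless of each triangle's shape.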
Alternatively, you could look into drawing in two passes: First, draw the shape of your graph, then draw a quad (two triangles) covering the entire screen with your gradient, using the appropriate glBlendFunc so that it only draws over the area you've filled in with your graph shape.
OpenGL ES can do what you want but you need to increase the tessellation of your model. In other words, instead of using just a few large triangles, you need more and smaller triangles, with the vertex color changes spread over them evenly. This will give you better control over the gradients. Triangles are cheap on accelerated OpenGL ES, so even if you increase the number 100 times, it will not have much impact on performance.
You might also consider a different approach, where the entire graph is covered by a single texture which contains the gradient. That would be easier to implement.

Rendering curves of TTF fonts

Could you point me to an effective algorithm for rendering and filling the curves used in TTF fonts? I have the data loaded as contours of points, so I'm only wondering about an effective way of drawing the curves. I'd also very much like it to support smoothing.
What I know up to this point:
TTF uses bezier curves and splines
TTF categorizes its points as points defining lines and points defining curves, the latter being either on the curve in question or off of it (control points)
One can make a polygon out of a curve contour where the curved parts are made of lines the size of a pixel.
One could use this polygon to render the filled contour and if one also uses the data as floats rather than ints, one could achieve font smoothing.
So could you point me to a guide of some sort or something?
Thanks.
If you already have the vector data, then you have to rasterize it with some scanline fill algorithm. For smoothing, divide the pixels into n-by-n blocks, rasterize the characters at that finer resolution, and compute a gray value corresponding to the number of filled subpixels. Handling Bezier curves and splines, however, is not going to be easy, I think. If it is possible, I would use a library like FreeType or similar.
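A rough sketch of that approach in Python with numpy: flatten the TTF quadratic curves into polygons, scanline-fill them at n-times resolution with the even-odd rule, and average each n-by-n block into a gray value. The contour format (a list of closed (x, y) polygons, already flattened) is an assumption:

    import numpy as np

    def quad_bezier(p0, p1, p2, steps=8):
        """Flatten one quadratic Bezier segment into a short polyline."""
        t = np.linspace(0.0, 1.0, steps)[:, None]
        return ((1 - t) ** 2 * np.array(p0, float)
                + 2 * (1 - t) * t * np.array(p1, float)
                + t ** 2 * np.array(p2, float))

    def rasterize(contours, width, height, n=4):
        """Even-odd scanline fill at n-times resolution, then box-filter down to gray."""
        hi = np.zeros((height * n, width * n), dtype=np.uint8)
        for y in range(height * n):
            yc = (y + 0.5) / n                        # scanline y in pixel coordinates
            crossings = []
            for poly in contours:                     # each poly: closed list of (x, y)
                for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1]):
                    if (y0 <= yc < y1) or (y1 <= yc < y0):
                        crossings.append(x0 + (yc - y0) * (x1 - x0) / (y1 - y0))
            crossings.sort()
            for xa, xb in zip(crossings[0::2], crossings[1::2]):   # even-odd rule
                a = max(0, int(np.ceil(xa * n - 0.5)))
                b = max(0, int(np.ceil(xb * n - 0.5)))
                hi[y, a:b] = 1
        # coverage of each n-by-n block becomes the anti-aliased gray value
        return (hi.reshape(height, n, width, n).mean(axis=(1, 3)) * 255).astype(np.uint8)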
