How are filled paths rendered? - algorithm

What are the standard algorithms used in vector graphics for rendering filled paths?
I'm not only interested in the process of rendering strokes; I would also like to know how the shapes are filled, i.e. how it is determined whether a given point is inside or outside the path (I believe even specifying what "inside" and "outside" mean is not a straightforward thing).
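
For the inside/outside part of the question, the two standard fill rules are even-odd and nonzero winding. A minimal sketch of both, assuming the path has already been flattened to a closed polygon (the function names and representation are mine, not from any answer below):

    def point_in_path_evenodd(px, py, poly):
        """Even-odd rule: inside if a ray cast to the right of the point
        crosses the outline an odd number of times."""
        inside = False
        n = len(poly)
        for i in range(n):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % n]
            if (y1 > py) != (y2 > py):  # edge straddles the ray's scanline
                x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if x_cross > px:
                    inside = not inside
        return inside

    def point_in_path_nonzero(px, py, poly):
        """Nonzero winding rule: count the same crossings, but signed by
        edge direction; inside means the total winding is not zero."""
        winding = 0
        n = len(poly)
        for i in range(n):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % n]
            if (y1 > py) != (y2 > py):
                x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if x_cross > px:
                    winding += 1 if y2 > y1 else -1
        return winding != 0

A self-intersecting star is the classic case where the two rules disagree: even-odd leaves the central pentagon unfilled, nonzero winding fills it. SVG and PostScript let you choose either rule per path.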

1. find outline (perimeter as polygon)
   this I think you already have
2. triangulate (or cut into convex polygons)
   there are many approaches, like:
   - ear clipping
   - Delaunay
   see the Wikipedia article on Polygon triangulation
3. fill the convex triangles/polygons
   this is easy; either use:
   - a gfx lib like OpenGL, DirectX, ...
   - an API like GDI
   - or rasterize on your own, as in "how to rasterize convex polygons" (see the scanline sketch after this list)
4. style
   This stuff is more complicated than it sounds at first:
   - outline width (pen, stroke): convert the outline to a polygon by shifting it outwards or inwards. For more info see this.
   - outline style (pen, stroke): full, dash dot, dot dot, ... For more info see this.
   - filling style (brush): like hatching, which is the most complicated of all. It involves heavy polygon tweaking, similar to but much harder than the outline-width case. Some styles are simple, some extremely complicated; for an equidistant line fill, for example, a simple loop + intersection + inside-polygon test will do. For the inside-polygon test you can use a hit test, such as the even-odd/nonzero sketch above.
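
A minimal scanline rasterizer for step 3, assuming a set_pixel callback and sampling at integer scanlines (both my assumptions). For a convex polygon each scanline is a single span between the minimum and maximum edge intersections:

    def fill_convex_polygon(poly, set_pixel):
        n = len(poly)
        ys = [p[1] for p in poly]
        for y in range(int(min(ys)), int(max(ys)) + 1):
            xs = []
            for i in range(n):
                x1, y1 = poly[i]
                x2, y2 = poly[(i + 1) % n]
                # half-open test so shared vertices are not counted twice
                if (y1 <= y < y2) or (y2 <= y < y1):
                    xs.append(x1 + (y - y1) * (x2 - x1) / (y2 - y1))
            if xs:  # fill the single span for this scanline
                for x in range(round(min(xs)), round(max(xs)) + 1):
                    set_pixel(x, y)

    # usage: collect the filled pixels of a triangle
    pixels = set()
    fill_convex_polygon([(0, 0), (10, 0), (5, 8)], lambda x, y: pixels.add((x, y)))

For a concave or self-intersecting polygon you would instead sort all intersections per scanline and fill alternate spans, which is the even-odd rule above applied one row at a time.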

Related

Is there any way to implement this beautiful image effect?

Recently I found an amazing app called Photo Lab, and I'm curious about one effect called Paper Rose. In the pictures below, one is the original picture and the other is the processed picture. My question is: what kind of algorithm can produce this effect? It would be even better if you could show me some code or a demo. Thanks in advance!
[images omitted: the original photo and the Paper Rose result]
I am afraid that this is not just an algorithm, but a complex piece of software.
The most difficult part is to model the shape of the rose. The petals are probably a meshed surface. It is not so difficult to give them a curved shape, but the hard issue is to group them in such a way that they do not intersect.
It is not impossible that this could be achieved by first laying the petals out in a flat geometry where you can control intersections, then wrapping the result around an axis with a kind of polar transform. But I don't really believe in that; I rather think that they have a collision-avoiding geometric modeller.
The next steps, which are more classical, are to texture-map the pictures onto the petals and to perform the realistic rendering of the whole scene.
But there's another option, which I'll call the "poor man's rendering".
You can start from a real picture of a paper rose, where the petals have an empty black, thick frame. Then on the picture, you detect (either in some automated way or just by hand) points that correspond to a regular grid on the flattened paper.
As the petals are not wholly visible, the hidden parts must be clipped out from the mesh, possibly by using a polygonal fence.
Now you can take any picture, fit it over the undistorted mesh, clip out the hidden areas and warp to the distorted position. Then by compositing tricks, you will give it a natural shaded appearance on the rose.
Note: the process is eased by drawing a complete grid inside the frame. Anyway, you will need to somehow erase it before doing the compositing, in order to retrieve just the shading information.
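
The fit-and-warp step itself is standard; a minimal sketch with scikit-image's piecewise affine transform (the grid arrays, function name, and petal_shape are my assumptions; in a real tool photo_grid would come from the manual or automated grid detection described above):

    import numpy as np
    from skimage.transform import PiecewiseAffineTransform, warp

    def warp_onto_petal(picture, flat_grid, photo_grid, petal_shape):
        """picture:    the user's image, laid over the flattened paper
        flat_grid:  (N, 2) regular grid of (x, y) points on the flat paper
        photo_grid: (N, 2) same points as located on the rose photograph"""
        tform = PiecewiseAffineTransform()
        # warp() uses the transform as a map from output (photo) pixels
        # back to input (flat picture) pixels, hence this argument order
        tform.estimate(np.asarray(photo_grid, float),
                       np.asarray(flat_grid, float))
        return warp(picture, tform, output_shape=petal_shape)

Clipping the hidden areas and transferring the shading are then compositing steps layered on top of this warp.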
I would tend to believe that the second approach was used here, as I see a few mapping anomalies along some edges, which would not arise on a fully synthetic scene.
In any case, hard work.

silhouette rendering with webgl / opengl

I've been trying to render silhouettes on CAD models with WebGL. The closest I got to the desired result was with fwidth and a dot product between the normal and the eye vector, but I found it difficult to control the width.
I saw another web-based viewer that is capable of doing something like this:
I started digging through the shaders, and the most I could figure out is that this is analytical: an actual line entity is drawn, and the width is achieved by rendering a quad instead of default WebGL lines. There is a bunch of logic in the shader, and my best guess is that the vertex positions are simply updated on every render.
This is a procedural model, so I guess that for cones and cylinders two lines can always be allocated, the silhouette points computed, and the lines updated.
If that is the case, would it be a good idea to try to do something like this in the shader (maybe it's already happening and I didn't understand it)? I can see a cylinder being written to attributes or uniforms and the points computed.
Is there an approach like this already documented somewhere?
edit 8/15/17
I have not found any papers or documented techniques about this, but the question got a couple of votes.
Given that I do have information about the cylinders and cones, my idea is to sample the normal of the parametric surface at each vertex, push the surface out by a factor that covers some number of pixels in screen space, stencil it, and then draw a thick line, clipping it with the actual shape of the surface.
The traditional shader-based method is Gooch shading. The original paper is here:
http://artis.imag.fr/~Cyril.Soler/DEA/NonPhotoRealisticRendering/Papers/p447-gooch.pdf
The old-fashioned OpenGL technique is the one from Jeff Lander; a sketch of it follows.
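
A minimal transcription of that classic two-pass trick in fixed-function GL via PyOpenGL (my sketch, assuming an existing GL context and a draw_model() callback; note that core-profile GL and WebGL have no glPolygonMode, so there the line pass has to be emulated with extruded quads, which is likely what the viewer above does):

    from OpenGL.GL import (
        GL_BACK, GL_CULL_FACE, GL_DEPTH_TEST, GL_FILL, GL_FRONT,
        GL_FRONT_AND_BACK, GL_LEQUAL, GL_LESS, GL_LINE,
        glColor3f, glCullFace, glDepthFunc, glEnable, glLineWidth,
        glPolygonMode,
    )

    def draw_with_silhouette(draw_model, width_px=4.0):
        """Fill front faces, then draw back faces as thick wireframe;
        only line pixels along the contour survive the depth test,
        which is exactly the silhouette."""
        glEnable(GL_DEPTH_TEST)
        glEnable(GL_CULL_FACE)

        # pass 1: ordinary filled rendering of the front faces
        glCullFace(GL_BACK)
        glPolygonMode(GL_FRONT_AND_BACK, GL_FILL)
        glDepthFunc(GL_LESS)
        draw_model()

        # pass 2: back faces as thick lines; LEQUAL lets contour pixels through
        glCullFace(GL_FRONT)
        glPolygonMode(GL_FRONT_AND_BACK, GL_LINE)
        glLineWidth(width_px)
        glColor3f(0.0, 0.0, 0.0)
        glDepthFunc(GL_LEQUAL)
        draw_model()

        # restore defaults
        glPolygonMode(GL_FRONT_AND_BACK, GL_FILL)
        glDepthFunc(GL_LESS)

The line width here is controlled directly in pixels, which is what the fwidth-based fragment approach makes difficult.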

Is there a common technique for drawing a "stretchy" line

I'm trying to figure out how to draw a stretchy/elastic line between two points in OpenGL/Cocos2d on iPhone, something like this:
where the "band" gets thinner as the line gets longer. iOS uses the same technique I'm aiming for in the Mail.app's pull-to-refresh.
First of all, is there a name for this kind of thing?
My first thought was to plot a point on the radius of the starting and ending circles based on the angle between the two, and draw a quadratic Bézier curve using the distance/2 as a control point. But I'm not a maths whizz, so I'm struggling to figure out how to place the control point that adjusts the thickness of the path.
A bigger problem is that I need to fill the shape with a colour, and that doesn't seem to be possible with OpenGL Bézier curves as far as I can tell, since the curves don't form part of a shape that can be filled.
So I looked at using a spline created from a point array, but that opens up a whole new world of mathematical pain, as I'd have to figure out where all the points along the edge of the path are.
So before I go down that rabbit hole, I'm wondering whether there's something simpler that I'm overlooking, or if anyone can point me towards the most effective technique.
I'm not sure about a "common" technique that people use, other than calculating it mathematically, but this project, SlimeyRefresh, is a good example of how to accomplish this; a sketch of the mathematical route follows.
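
On the fill problem: since GL cannot fill curves directly, one workable approach is to sample both Bézier edges into a closed polygon and fill that. A sketch, where the sag factor is an invented knob of mine, not something from SlimeyRefresh:

    import math

    def stretchy_band(p0, p1, r0, r1, sag=0.35, samples=24):
        """Closed polygon outline for an elastic band between two circles
        of radii r0 and r1 centred at p0 and p1 (assumed distinct).  Each
        edge is a quadratic Bezier whose control point sits at a waist
        that pinches as the endpoints separate."""
        (x0, y0), (x1, y1) = p0, p1
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy)
        nx, ny = -dy / length, dx / length      # unit normal to the axis
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2   # midpoint of the axis
        # waist half-width shrinks with distance, never below one pixel
        waist = max(1.0, (r0 + r1) / 2 - sag * length)

        def edge(sign):
            a = (x0 + sign * nx * r0, y0 + sign * ny * r0)        # on circle 0
            c = (mx + sign * nx * waist, my + sign * ny * waist)  # control pt
            b = (x1 + sign * nx * r1, y1 + sign * ny * r1)        # on circle 1
            pts = []
            for i in range(samples + 1):
                t = i / samples
                u = 1.0 - t
                pts.append((u * u * a[0] + 2 * u * t * c[0] + t * t * b[0],
                            u * u * a[1] + 2 * u * t * c[1] + t * t * b[1]))
            return pts

        # one edge forward, the other backward -> closed, fillable polygon
        return edge(+1.0) + edge(-1.0)[::-1]

For rendering, pairing the i-th sample of each edge gives a triangle strip directly; the closed outline form is convenient for scanline filling or hit testing.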

Drawing simple shapes or using sprites with OpenGL

I want to create a simple shape, let's say a circle. It might have transparency, colors, etc., but it's still a simple circle.
In every tutorial I see, people use sprites, and I am not sure what I should use in my case.
Should I use a sprite with a circle or should I try and draw the shape myself?
What are the advantages of each method?
Is there a line dividing them or is it just experience to know which one to use?
GPU geometry is composed of triangles or line segments, so it is inefficient to draw a circle that way: it requires too many triangles to look smooth.
The two more efficient ways to do that are:
Use a sprite
Use a shader and draw the circle. Check ShaderToy, more specifically the "Shapes" preset (a CPU sketch of the idea follows).
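
For intuition, here is the distance-based circle from option 2 evaluated on the CPU with numpy (the GLSL version is essentially a one-liner with length() and smoothstep(); the sizes and colors here are arbitrary choices of mine):

    import numpy as np

    def circle_rgba(size=256, radius=0.4, color=(1.0, 0.2, 0.2)):
        """Per pixel, compare the distance to the centre against the
        radius and feather the edge by roughly one pixel, the same math
        a fragment shader would run."""
        # pixel coordinates normalised to [-0.5, 0.5]
        ys, xs = np.mgrid[0:size, 0:size] / size - 0.5
        d = np.sqrt(xs**2 + ys**2) - radius   # signed distance to the edge
        aa = 1.0 / size                       # ~one-pixel feather width
        alpha = np.clip((aa - d) / (2 * aa), 0.0, 1.0)
        img = np.empty((size, size, 4), dtype=np.float32)
        img[..., :3] = color
        img[..., 3] = alpha
        return img

This also shows why the two options converge: a sprite is just this computation baked into a texture once, traded against resolution independence.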

Recommend some Bresenham's-like algorithm of sphere mapping in 2D?

I need the fastest sphere mapping algorithm. Something like Bresenham's line drawing one.
Something like the implementation that I saw in Star Control 2 (rotating planets).
Are there any already invented and/or implemented techniques for this?
I really don't want to reinvent the wheel. Please, help...
Description of the problem.
I have a place on the 2D surface where the sphere has to appear. The sphere (let it be the Earth) has to be textured with a detailed map, and it has to be able to scale and rotate freely. I want to implement it with a lookup map or some simple transformation function of coordinates: each pixel on the 2D image of the sphere is defined as a number of pixels from the cylindrical map of the sphere. This gives me the ability to implement antialiasing of the resulting image. I am also thinking about using mipmaps when one pixel of the resulting picture corresponds to more than one pixel of the original map (for example, close to the poles of the sphere). Deep inside I feel that this can be implemented with some trivial math. But all these thoughts are just my thoughts.
This question is a little bit related to this one: Textured spheres without strong distortion, but there were no answers available on my question.
UPD: I assume that I have no hardware support. I want a cross-platform solution.
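
The "trivial math" hunch is right for an orthographic view. A minimal per-pixel sketch of exactly the transformation function described above (spin about the vertical axis only; the argument names and nearest-texel sampling are my simplifications, and antialiasing would supersample or mipmap this):

    import math

    def sphere_pixel_to_map(sx, sy, radius, map_w, map_h, rot_lon=0.0):
        """Map a pixel (sx, sy) of the sphere's 2D image (orthographic
        view, centre at the origin) to a texel of the equirectangular
        (cylindrical) map.  Returns None outside the sphere's disc."""
        x = sx / radius
        y = sy / radius
        r2 = x * x + y * y
        if r2 > 1.0:
            return None                      # outside the disc
        z = math.sqrt(1.0 - r2)              # reconstruct depth on the sphere
        lat = math.asin(y)                   # latitude  in [-pi/2, pi/2]
        lon = math.atan2(x, z) + rot_lon     # longitude, shifted by rotation
        u = (lon / (2 * math.pi) + 0.5) % 1.0
        v = lat / math.pi + 0.5
        return int(u * (map_w - 1)), int(v * (map_h - 1))

The growing texel footprint near the poles shows up as a large (u, v) step between neighbouring screen pixels, which is exactly the signal a mipmap level selection needs.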
The standard way to do this kind of mapping is a cube map: the sphere is projected onto the 6 sides of a cube. Modern graphics cards support this kind of texture at the hardware level, including full texture filtering; I believe mipmapping is also supported.
An alternative method (which is not explicitly supported by hardware, but which can be implemented with reasonable performance by procedural shaders) is parabolic mapping, which projects the sphere onto two opposing parabolas (each of which is mapped to a circle in the middle of a square texture). The parabolic projection is not a projective transformation, so you'll need to handle the math "by hand".
In both cases, the distortion is strictly limited. Due to the hardware support, I recommend the cube map.
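
For reference, the cube-map lookup itself is only a major-axis selection. A sketch following the OpenGL face conventions (treat the per-face sign flips as approximate, since conventions vary across APIs):

    def cubemap_lookup(dx, dy, dz):
        """Pick the cube face and (u, v) in [0, 1] for a direction vector:
        the face is the axis with the largest magnitude, and the other two
        components are divided by it and remapped to texture space."""
        ax, ay, az = abs(dx), abs(dy), abs(dz)
        if ax >= ay and ax >= az:            # +X or -X face
            face = '+x' if dx > 0 else '-x'
            u, v, m = (-dz if dx > 0 else dz), -dy, ax
        elif ay >= az:                       # +Y or -Y face
            face = '+y' if dy > 0 else '-y'
            u, v, m = dx, (dz if dy > 0 else -dz), ay
        else:                                # +Z or -Z face
            face = '+z' if dz > 0 else '-z'
            u, v, m = (dx if dz > 0 else -dx), -dy, az
        return face, 0.5 * (u / m + 1.0), 0.5 * (v / m + 1.0)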
There is a nice new way to do this: HEALPix.
Advantages over any other mapping:
The bitmap can be divided into equal parts (very little distortion)
Very simple, recursive geometry of the sphere with arbitrary precision.
Did you take a look at Jim Blinn's articles "How to draw a sphere" ? I do not have access to the full articles, but it looks like what you need.
I'm a big fan of StarconII, but unfortunately I don't remember the details of what the planet drawing looked like...
The first option is triangulating the sphere and drawing it with standard 3D polygons. This has definite weaknesses as far as verisimilitude is concerned, but it uses the available hardware acceleration and can be made to look reasonably good.
If you want to roll your own, you can rasterize it yourself. Foley, van Dam et al.'s Computer Graphics: Principles and Practice has a chapter on Bresenham-style algorithms; you want the section on "Scan Converting Ellipses".
For the point-cloud idea I suggested in earlier comments: you could avoid runtime parameterization questions by preselecting and storing the (x, y, z) coordinates of surface points instead of a 2D map. I was thinking of partially randomizing the point locations on the sphere, so that they wouldn't cause structured aliasing when transformed (forwards, backwards, whatever 8^) onto the screen. On the downside, you'd have to deal with the "fill" factor: summing up the colors as you draw them and dividing by the number of points. Also, you'd have the problem of what to do if there are no points; if you zoom in with extreme magnification, for example, you'll need to look for the nearest stored point.
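
A sketch of that fill-factor bookkeeping with numpy (the array layout, names, and plain orthographic projection are mine; the nearest-point fallback for extreme zoom is left out):

    import numpy as np

    def splat_sphere_points(points_rgb, rot, size=200):
        """Render pre-stored sphere surface points: rotate, project
        orthographically, and average the colors landing in each pixel.

        points_rgb: (N, 6) array of x, y, z, r, g, b with (x, y, z) on
        the unit sphere; rot: 3x3 rotation matrix."""
        pts = points_rgb[:, :3] @ rot.T
        rgb = points_rgb[:, 3:]
        front = pts[:, 2] > 0.0                  # keep the visible hemisphere
        px = ((pts[front, 0] * 0.5 + 0.5) * (size - 1)).astype(int)
        py = ((pts[front, 1] * 0.5 + 0.5) * (size - 1)).astype(int)
        acc = np.zeros((size, size, 3))
        cnt = np.zeros((size, size, 1))
        np.add.at(acc, (py, px), rgb[front])     # sum colors per pixel
        np.add.at(cnt, (py, px), 1.0)            # count points per pixel
        return acc / np.maximum(cnt, 1.0)        # average; empty pixels stay 0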
