Scaling and stretching SVG to four arbitrary corners - matrix

I have an SVG image (a site plan) with width w, height h, which I would like to view on a map background.
To superimpose it at the right place on the map, I would like to stretch it to four arbitrary corners: x1,y1, x2,y2, x3,y3, x4,y4.
I figure I might be able to do this with a combination of SVG transforms (scale, rotate, skew, translate) but my maths is nowhere near up to the job. Any clues?

Rotation and translation can describe Euclidean (i.e. length-preserving) transformations. With isotropic scaling added in you obtain similarity transformations, and with anisotropic scaling and skewing you even get affine transformations. So that is the kind of transformation your operations can express.
But an affine transformation is already uniquely defined by three points and their images. Which means the fourth corner will end up in a location determined by the other three. To arbitrarily position four corners, you need a projective transformation.
See also this post about how to compute a projective transformation, how to apply it, and how to use it in JavaScript if the browser supports projective transformations in 3D.
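If it helps, here is a minimal numpy sketch (my own, not taken from the linked post) of solving the standard 8x8 linear system for the 3x3 projective matrix that maps the image corners (0,0), (w,0), (w,h), (0,h) to your four target corners; the corner ordering is an assumption:

import numpy as np

def projective_matrix(w, h, corners):
    # corners: [(x1,y1), (x2,y2), (x3,y3), (x4,y4)], the images of
    # (0,0), (w,0), (w,h), (0,h) respectively (assumed ordering)
    src = [(0, 0), (w, 0), (w, h), (0, h)]
    A, b = [], []
    for (sx, sy), (dx, dy) in zip(src, corners):
        # each point correspondence contributes two rows of the system
        A.append([sx, sy, 1, 0, 0, 0, -dx * sx, -dx * sy])
        b.append(dx)
        A.append([0, 0, 0, sx, sy, 1, -dy * sx, -dy * sy])
        b.append(dy)
    m = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(m, 1.0).reshape(3, 3)  # fix the last entry to 1

def apply_matrix(M, x, y):
    # homogeneous multiply, then divide through by the third component
    px, py, pz = M @ (x, y, 1.0)
    return px / pz, py / pz

The division by the third component is exactly the step that an affine transform cannot express, which is why three corners pin down the fourth in the affine case.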

Related

Stroke width transform alternative

I've thought about using the following method to determine stroke width:
compute the distance of a pixel from the edge of the shape (bwdist in Matlab)
compute the skeleton of the blob using Lam-Lee's skeletonization algorithm (bwmorph('thin',inf) in Matlab)
measure the stroke width on the distance transformed image where the skeleton passes.
The skeleton is supposed to pass through the local maxima of the distance transform, if I'm not mistaken.
This seems more straightforward to me than "flood-filling" the shape according to the gradients of the distance transform, and it also gives equal weight to thin strokes and to wide strokes.
Is this a good approach, or did I make a mistake in my reasoning?
Why doesn't anyone use this approach?
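For reference, here is the pipeline as a minimal Python sketch (using scipy/scikit-image stand-ins for the Matlab calls; treating them as equivalent to bwdist and bwmorph('thin',inf) is an assumption on my part):

import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def stroke_widths(mask):
    # mask: boolean array, True inside the shape
    dist = distance_transform_edt(mask)  # distance of each pixel to the edge
    skel = skeletonize(mask)             # thin skeleton of the blob
    # the distance transform sampled where the skeleton passes
    # approximates half the local stroke width
    return 2.0 * dist[skel]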

OpenGL ES GL_TRIANGLES gradient issue

I am trying to draw an area graph with a gradient. This is what I have right now.
If you look at the red-green graph, you will notice the gradient does not look the way it's supposed to.
EDIT: The gradient should be uniform like this:
I am using OpenGL ES 2.0 and GLKit to draw a bunch of charts. The chart is drawn using GL_TRIANGLES. I understand that the issue is that the gradient is being drawn for each triangle individually.
The only approach I can think of is to use a stencil buffer. I will draw the gradient in a big rectangle and clip it to this shape using the stencil. Is there a better way to do this? If not, could you help me draw a stencil with the specified points? I am new to OpenGL and haven't found a good explanation of using the stencil buffer.
You don't need a stencil buffer. I don't think more triangles will help, either — more likely that'd just cause you more confusion because you'd be assigning per-vertex colors to intermediate vertices and having to interpolate them yourself.
Your gradients are coming out that way because of how and where you assign vertex colors for interpolation. Notice the difference in colors between your output and the example of what you're looking for:
You've got 100% red at every vertex along the top edge of your graph, and 100% green at every vertex along the bottom edge. OpenGL interpolates colors linearly across the face of each triangle, which is why you've got more red in the shorter parts of your graph.
In the output you're looking for, the top of the graph starts out less red in the shorter parts, so that it makes the transition to white over a shorter distance.
There are a few different ways to do this, but probably the easiest (for your plan of using GLKBaseEffect instead of writing your own shaders) might be to use a 1D texture for your gradient, and assign a texture coordinate to each vertex that's proportional to its Y coordinate on the graph, like so:
(The example coordinates in my diagram assume your graph vertices cover the range 0.0 to 1.0, but the point stands regardless: the vertical texture coordinate for each point should be a fraction of the graph's total height, between 0.0 and 1.0.)
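In code, the per-vertex texture coordinate is just the vertex's height as a fraction of the graph's total height; a tiny sketch of the arithmetic (plain Python, with an assumed vertex layout):

def gradient_texcoord(y, graph_height):
    # 1D texture coordinate for a vertex at height y: every triangle
    # then samples the same global gradient instead of stretching the
    # full red-to-green range across its own extent
    return y / graph_height

# assuming vertices is a list of (x, y) pairs:
# texcoords = [gradient_texcoord(y, graph_height) for (x, y) in vertices]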
Alternatively, you could look into drawing in two passes: First, draw the shape of your graph, then draw a quad (two triangles) covering the entire screen with your gradient, using the appropriate glBlendFunc so that it only draws over the area you've filled in with your graph shape.
OpenGL ES can do what you want but you need to increase the tessellation of your model. In other words, instead of using just a few large triangles, you need more and smaller triangles, with the vertex color changes spread over them evenly. This will give you better control over the gradients. Triangles are cheap on accelerated OpenGL ES, so even if you increase the number 100 times, it will not have much impact on performance.
You might also consider a different approach, where the entire graph is covered by a single texture which contains the gradient. That would be easier to implement.

How to get the histogram orientation of a 'one' cell according to Dalal and Triggs?

I am trying to implement the method of Dalal and Triggs. I have implemented the first stage, computing gradients on an image, and I have written the code that walks across the image in cells, but I don't understand the logic behind this stage.
I know it is first necessary to choose between signed (0-360 degrees) and unsigned (0-180 degrees) gradients.
I know I must create a data structure to store each cell's histogram, with n bins. I know what a histogram is, so I understand I must visit each pixel, but I don't fully understand the method for classifying each pixel: getting the gradient orientation of that pixel and building the histogram from this data.
In short, HOG is a dense representation of gradient orientations, weighted by their strengths, over overlapping local neighbourhoods.
You asked about the significance of finding each pixel's gradient orientation. In an image, the gradient orientation at a pixel indicates the direction of the boundary (the edge between two textures) of the object at that location, relative to the X and Y axes. So if you collect the orientations over a patch, block, or part of an object, they represent the distribution of the object's edge directions in that region in a very distinctive way. Take a simple example: if you plot the gradient orientations of a circle as a histogram, you get a flat histogram (don't imagine HOG as just a simple plot of gradient orientations), because the orientations of a circle's edges range from 0 to 360 degrees if you sample at 360 consecutive locations. A different object gives a different distribution. HOG does the same thing, but in a more sophisticated manner: it divides the image into overlapping blocks, divides each block into cells, and weights each histogram by the strengths of the local gradients.
Hope it is useful ...
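To make the per-pixel step concrete, here is a minimal numpy sketch of one cell's histogram, using unsigned gradients and 9 bins (note that Dalal and Triggs additionally split each vote bilinearly between the two nearest bins, which this sketch omits):

import numpy as np

def cell_histogram(cell, n_bins=9):
    # cell: 2D grayscale patch (one cell of the image)
    gy, gx = np.gradient(cell.astype(float))
    magnitude = np.hypot(gx, gy)
    # unsigned gradients: fold orientations into [0, 180) degrees
    orientation = np.degrees(np.arctan2(gy, gx)) % 180.0
    bins = (orientation / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    # each pixel votes for its orientation bin, weighted by magnitude
    np.add.at(hist, bins, magnitude)
    return hist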

computer vision: extracting info about a shape given a contour (e.g. pointy, round...)

Given the 2D contour of a shape in the form of lines and vertices, how can I extract information from it?
For example: pointy, round, straight line.
Shape similarities with a given shape.
Code is not necessary; I am more interested in concepts and the names of the techniques involved, to guide my search....
Thanks in advance.
Image moments
One approach is to calculate the first- and second-order central moments of the shape described by the 2D contour. Using these values, the elongation of the object can be calculated.
The central image moments can be combined into the seven moments of Hu, which are invariant to changes in scale, rotation and translation (i.e. they are very good for basic shape recognition). (More on image moments here.)
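As a rough illustration, an OpenCV sketch computing the Hu moments and an elongation measure from a contour (the elongation formula, the ratio of the eigenvalues of the covariance matrix built from the second-order central moments, is one common choice and my own addition):

import cv2
import numpy as np

def shape_descriptors(contour):
    # contour: point array as returned by cv2.findContours
    m = cv2.moments(contour)         # raw, central and normalised moments
    hu = cv2.HuMoments(m).flatten()  # the seven Hu invariants
    # elongation from the second-order central moments
    mu20, mu02, mu11 = m["mu20"], m["mu02"], m["mu11"]
    d = np.sqrt(4 * mu11**2 + (mu20 - mu02)**2)
    elongation = (mu20 + mu02 + d) / (mu20 + mu02 - d)
    return hu, elongation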
Unitless ratio of perimeter and area
Another approach is to calculate the length of the perimeter (p) and the size of the enclosed area (a). Using these two values, the following ratio can be computed:
ratio = p^2 / (4 * pi * a)
The closer this ratio is to one, the more circle-like the described shape is.
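A small numpy sketch of this ratio, given the contour vertices in order (using the shoelace formula for the enclosed area, which is my addition):

import numpy as np

def circularity(points):
    # points: (N, 2) array of contour vertices, assumed to form a closed loop
    closed = np.vstack([points, points[:1]])
    p = np.sum(np.linalg.norm(np.diff(closed, axis=0), axis=1))  # perimeter
    x, y = points[:, 0], points[:, 1]
    a = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    return p**2 / (4 * np.pi * a)  # 1.0 for a perfect circle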
Other methods
Fourier descriptors
Ratio of the shape's area to the area of its convex hull
Another method of contour shape classification is a topological approach based on the "size function". That could be useful for global shape recognition, but not for extracting "local" features like pointy/round/straight.
http://en.wikipedia.org/wiki/Size_function
Basically, it slices the contour with a parametrized line and counts the number of connected components as a function of the parameter.
http://www.ingre.unimo.it/staff/landi/articoli/patrec.pdf
What I think you might be looking for is often called Blob or Connectivity Analysis, which I believe was first developed at SRI (Stanford Research Institute). Image moments are one component of this area.

Resources for image distortion algorithms

Where can I find algorithms for image distortions? There is so much info on blur and other classic algorithms, but so little on more complex ones. In particular, I am interested in the swirl-effect image distortion algorithm.
I can't find any references, but I can give a basic idea of how distortion effects work.
The key to the distortion is a function which takes two coordinates (x,y) in the distorted image and transforms them to coordinates (u,v) in the original image. This specifies the inverse function of the distortion, since it takes the distorted image back to the original image.
To generate the distorted image, one loops over x and y, calculates the point (u,v) from (x,y) using the inverse distortion function, and sets the colour components at (x,y) to be the same as those at (u,v) in the original image. One usually uses interpolation (e.g. http://en.wikipedia.org/wiki/Bilinear_interpolation ) to determine the colour at (u,v), since (u,v) usually does not lie exactly on the centre of a pixel, but rather at some fractional point between pixels.
A swirl is essentially a rotation, where the angle of rotation is dependent on the distance from the centre of the image. An example would be:
a = amount of rotation
b = size of effect
angle = a*exp(-(x*x+y*y)/(b*b))
u = cos(angle)*x + sin(angle)*y
v = -sin(angle)*x + cos(angle)*y
Here, I assume for simplicity that the centre of the swirl is at (0,0). The swirl can be put anywhere by subtracting the swirl position coordinates from x and y before the distortion function, and adding them to u and v after it.
There are various swirl effects around: some (like the above) swirl only a localised area, and have the amount of swirl decreasing towards the edge of the image. Others increase the swirling towards the edge of the image. This sort of thing can be done by playing about with the angle= line, e.g.
angle = a*(x*x+y*y)
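Putting all of the above together, a minimal numpy/scipy sketch of the whole inverse-mapping loop, vectorised, with bilinear sampling, and with the centre offset handled as described:

import numpy as np
from scipy.ndimage import map_coordinates

def swirl(image, cx, cy, a, b):
    # image: 2D grayscale array; (cx, cy): swirl centre;
    # a: amount of rotation; b: size of the effect
    ys, xs = np.indices(image.shape, dtype=float)
    x, y = xs - cx, ys - cy  # move the swirl centre to the origin
    angle = a * np.exp(-(x * x + y * y) / (b * b))
    u = np.cos(angle) * x + np.sin(angle) * y + cx
    v = -np.sin(angle) * x + np.cos(angle) * y + cy
    # bilinear interpolation at the fractional source coordinates
    return map_coordinates(image, [v, u], order=1, mode="nearest")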
There is a Java implementation of a lot of image filters/effects at Jerry's Java Image Filters. Maybe you can take inspiration from there.
The swirl and others like it are a matrix transformation on the pixel locations. You make a new image and take the colour from the position in the original image obtained by multiplying the current position by a matrix.
The matrix depends on the current position.
Here is a good CodeProject article showing how to do it:
http://www.codeproject.com/KB/GDI-plus/displacementfilters.aspx
There is also a new graphics library with many features:
http://code.google.com/p/picasso-graphic/
Take a look at ImageMagick. It's an image conversion and editing toolkit with interfaces for all popular languages.
The -displace operator can create swirls with the correct displacement map.
If you are for some reason not satisfied with the ImageMagick interface, you can always take a look at the source code of the filters and go from there.
