The distance transform provides the distance of each pixel from the nearest boundary/contour/background pixel. I don't want the closest distance; rather, I want some sort of average measure of the pixel's distance from the boundary/contour in all directions. Any suggestions for computing this kind of distance transform would be appreciated. If there are any existing algorithms and/or efficient C++ code available to compute such a distance transform, that would be wonderful too.
If you have a binary image of the contours, then you can calculate the number of boundary pixels around each pixel within some window (using e.g. the integral image, or cv::blur). This would give you something like what you want.
You might be able to combine that with normalizing the distance transform for average distances.
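For illustration, a minimal OpenCV sketch of the windowed count (the function name, mask convention and window size are just placeholders; it assumes the contour image is an 8-bit mask with boundary pixels set to 255):

```cpp
// Sketch only: assumes "contours" is a CV_8U binary image with boundary
// pixels set to 255 and everything else 0 (names are illustrative).
#include <opencv2/imgproc.hpp>

cv::Mat boundaryDensity(const cv::Mat& contours, int window = 21)
{
    cv::Mat mask, mean;
    contours.convertTo(mask, CV_32F, 1.0 / 255.0);   // 1.0 at boundary pixels
    // cv::blur computes the mean over the window; multiplying by the window
    // area turns that mean back into a count of boundary pixels.
    cv::blur(mask, mean, cv::Size(window, window));
    return mean * static_cast<double>(window * window);
}
```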
If you want the "average measure of the pixel's distance from the boundary/contour in all directions", then I am afraid you have to extract the contour and, for each pixel inside the pattern, compute the average distance to the pixels belonging to the contour.
A heuristic for a "rough" approximation would be to compute several distance maps from source points (they could be the pattern extremities) and, for each pixel inside the pattern, sum the distances from those distance maps. To get the exact measure you would have to compute as many distance maps as there are pixels on the contour, but if an approximation is okay, this will speed up the processing.
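As a concrete (if slow) reference for the exact measure, here is a brute-force OpenCV sketch; it assumes the pattern is given as a binary mask and simply averages the distance from every inside pixel to every contour pixel:

```cpp
// Brute-force sketch of the exact measure: for every pixel inside the
// pattern, average the Euclidean distance to every contour pixel.
// Assumes "pattern" is a CV_8U binary mask of the filled shape.
#include <opencv2/imgproc.hpp>
#include <vector>
#include <cmath>

cv::Mat averageDistanceToContour(const cv::Mat& pattern)
{
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(pattern.clone(), contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);

    cv::Mat avg(pattern.size(), CV_32F, cv::Scalar(0));
    for (int y = 0; y < pattern.rows; ++y)
        for (int x = 0; x < pattern.cols; ++x) {
            if (!pattern.at<uchar>(y, x)) continue;   // outside the pattern
            double sum = 0; size_t n = 0;
            for (const auto& c : contours)
                for (const cv::Point& p : c) {
                    sum += std::hypot(p.x - x, p.y - y);
                    ++n;
                }
            if (n) avg.at<float>(y, x) = static_cast<float>(sum / n);
        }
    return avg;   // cost is O(#inside pixels * #contour pixels)
}
```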
For some topographic map, there is a feature (such as a river). There is a corresponding file arranged in rows and columns, where each cell maps 1-to-1 with the corresponding pixel in the map and contains a value giving the distance from the feature.
For the purposes of triangulation, what is the best way to place x, y points over this map, arranged so that the points are closely packed where the distance is below some threshold, and spread further and further apart, linearly with the distance, up to some threshold distance?
Circle packing seems like the best option at this point, but I can't find compelling documentation on how this might be implemented for this use-case.
A decent example would be something like this, where circles are packed approximately according to intensity (and then points can be placed in the center of the circles):
A simple way is to place sites randomly, then pick the greyscale value at each site and feed it to a weighted triangulation, where the distance function is the Euclidean distance minus the weight. From the result, take the center of gravity of each triangle, make it the new site, and repeat for x iterations.
Source: https://en.m.wikipedia.org/wiki/Stippling
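The weighted triangulation itself takes some machinery; as a rough, self-contained sketch of the same idea, here is a plain Lloyd-style relaxation instead, which assigns each pixel to its nearest site and moves each site to the weighted centre of gravity of its region (the weight field, site count and iteration count are all illustrative):

```cpp
// Rough Lloyd-style relaxation sketch: "weight" is a row-major w*h field
// where large values mean "pack points densely here". Each iteration costs
// O(w * h * nSites), so this is only meant to illustrate the idea.
#include <vector>
#include <random>
#include <cmath>

struct Pt { double x, y; };

std::vector<Pt> relaxSites(const std::vector<float>& weight, int w, int h,
                           int nSites, int iterations)
{
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> ux(0, w), uy(0, h);
    std::vector<Pt> sites(nSites);
    for (auto& s : sites) s = {ux(rng), uy(rng)};

    for (int it = 0; it < iterations; ++it) {
        std::vector<double> sx(nSites, 0), sy(nSites, 0), sw(nSites, 0);
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                // assign the pixel to its nearest site (plain Voronoi here;
                // the weighted variant would subtract a per-site weight)
                int best = 0; double bestD = 1e300;
                for (int i = 0; i < nSites; ++i) {
                    double d = std::hypot(sites[i].x - x, sites[i].y - y);
                    if (d < bestD) { bestD = d; best = i; }
                }
                double wgt = weight[y * w + x];
                sx[best] += wgt * x; sy[best] += wgt * y; sw[best] += wgt;
            }
        // move each site to the weighted centre of gravity of its region
        for (int i = 0; i < nSites; ++i)
            if (sw[i] > 0) sites[i] = {sx[i] / sw[i], sy[i] / sw[i]};
    }
    return sites;
}
```

The resulting sites can then be triangulated (e.g. with a Delaunay triangulation) to get the points for the map.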
I'm looking for an algorithm that converts a regular grid of heights (e.g. 1024x1024) to a triangular irregular network. Here is an image showing an example of a triangular irregular network:
I've looked on the internet for an algorithm to do this conversion, but I just couldn't find one. Basically the triangle density depends on the roughness and/or pixel error (when rasterized), or something like that.
Here's an idea for a two-step algorithm: do a Delaunay triangulation based on a coarse mesh first, then refine the triangles recursively until a certain error criterion is met.
For the first step, identify a set of vertices for the Delaunay triangulation. These vertices coincide with pixel coordinates. Extreme points that are either higher or lower than all four neighbouring pixels should be in the set, as should ridge points on the image borders, where both adjacent pixels along the border are lower or both are higher. This should give a coarse triangular mesh. You can get a finer mesh by also including pixels that have a high curvature.
In the second step, iterate through all triangles. Scan through each triangle along the pixel grid, accumulate the squared error for each pixel inside the triangle, and also keep track of the points of maximum and minimum signed error. If the average error per pixel does not meet your criterion, add the points of lowest and highest error to your triangulation. Verify the new triangles and re-triangulate as necessary.
Notes:
The coarse triangulation in step one should be reasonably fast. If the height map is ragged, you might end up with too many vertices in the ragged area. In that case, the height map might be smoothed with a Gaussian filter before applying the algorithm.
The recursive re-triangulation is probably not so fast, because determining the error requires scanning the triangles over and over. (The process should get faster as the triangle size decreases, but still.) A good criterion for finding vertices in step 1 might speed up step 2.
You can scan a triangle by finding the bounding box of pixels. Find the barycentric coordinates s, t of the lower left point of the bounding box and also the barycentric increments (dsx, dtx) and (dsy, dty) that correspond to a pixel move in the x and y directions. You can then scan the bounding box in two loops over the included pixels (x, y), calculate the barycentric coordinates (s, t) from your delta vectors and accumulate the error if you are inside the triangle, i.e. when s > 0, t > 0 and s + t < 1.
I haven't implemented this algorithm (yet - it is an interesting task), but I imagine that finding a good balance between speed and mesh quality is a matter of tailoring error criteria and vertex selection to the current height map.
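For reference, here is an untested sketch of the bounding-box scan from the notes above, accumulating the squared error of one triangle against the height map (the height accessor and vertex heights are placeholders):

```cpp
#include <algorithm>
#include <cmath>
#include <functional>

struct V2 { double x, y; };

// Accumulates the squared error of a height field against the plane through
// (A, zA), (B, zB), (C, zC), scanning only the triangle's pixel bounding box.
void scanTriangle(const V2& A, const V2& B, const V2& C,
                  double zA, double zB, double zC,
                  const std::function<double(int, int)>& height,
                  double& sumSqErr, int& count)
{
    int x0 = (int)std::floor(std::min({A.x, B.x, C.x}));
    int x1 = (int)std::ceil (std::max({A.x, B.x, C.x}));
    int y0 = (int)std::floor(std::min({A.y, B.y, C.y}));
    int y1 = (int)std::ceil (std::max({A.y, B.y, C.y}));

    // Barycentric coordinates (s, t) are affine in (x, y): invert the 2x2
    // matrix [B-A | C-A] once and step by constant increments per pixel.
    double bx = B.x - A.x, by = B.y - A.y, cx = C.x - A.x, cy = C.y - A.y;
    double det = bx * cy - cx * by;            // assumes a non-degenerate triangle
    double dsx =  cy / det, dtx = -by / det;   // change of (s, t) per step in x
    double dsy = -cx / det, dty =  bx / det;   // change of (s, t) per step in y
    double sRow = dsx * (x0 - A.x) + dsy * (y0 - A.y);
    double tRow = dtx * (x0 - A.x) + dty * (y0 - A.y);

    sumSqErr = 0.0; count = 0;
    for (int y = y0; y <= y1; ++y, sRow += dsy, tRow += dty) {
        double s = sRow, t = tRow;
        for (int x = x0; x <= x1; ++x, s += dsx, t += dtx) {
            if (s < 0 || t < 0 || s + t > 1) continue;   // pixel outside the triangle
            double planeZ = zA + s * (zB - zA) + t * (zC - zA);
            double err = height(x, y) - planeZ;
            sumSqErr += err * err;
            ++count;
        }
    }
}
```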
I have a set of rectangles which I need to cluster together, based on the Euclidean distance between them. The situation is explained in the attached image.
One possible approach is to take the center of each rectangle and cluster the center points using K-means (the distance function would be the Euclidean distance in the XY plane). However, I would like to know if there is any other approach to this problem which does not approximate a rectangle by its center point, but also takes the actual shape of the rectangle into consideration.
Have a look at algorithms such as DBSCAN and OPTICS that can be used with arbitrary data types as long as you can define a distance between them (such as the minimum rectangle-to-rectangle distance).
K-means is probably not so good, as it is designed for point data with squared Euclidean distance (= sum of squares, within-cluster variance).
One way to formulate this problem is to look at each pair of rectangles (i, j), define a distance d(i, j) between them, and form a distance matrix from those. This distance measure d could be the distance between rectangle centers, or something fancier like the distance between the closest points of the two rectangles.
Then apply a clustering algorithm that takes a distance matrix as input, defining your distance matrix D as the matrix whose element (i, j) is d(i, j).
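As a sketch of that setup, the minimum distance between two axis-aligned rectangles can be computed from the per-axis gaps, and the matrix filled pairwise (the struct and field names are illustrative):

```cpp
// Sketch of the distance-matrix setup, using the minimum distance between
// two axis-aligned rectangles as d(i, j).
#include <vector>
#include <cmath>
#include <algorithm>

struct Rect { double left, right, top, bottom; };   // axis-aligned, top < bottom

double rectDistance(const Rect& a, const Rect& b)
{
    // the gap along each axis is zero when the projections overlap
    double dx = std::max({0.0, a.left - b.right, b.left - a.right});
    double dy = std::max({0.0, a.top - b.bottom, b.top - a.bottom});
    return std::hypot(dx, dy);                       // 0 when rectangles touch/overlap
}

std::vector<std::vector<double>> distanceMatrix(const std::vector<Rect>& rects)
{
    size_t n = rects.size();
    std::vector<std::vector<double>> D(n, std::vector<double>(n, 0.0));
    for (size_t i = 0; i < n; ++i)
        for (size_t j = i + 1; j < n; ++j)
            D[i][j] = D[j][i] = rectDistance(rects[i], rects[j]);
    return D;
}
```

The resulting matrix can then be handed to any algorithm that accepts precomputed distances, e.g. DBSCAN or OPTICS as suggested above.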
Related: Clustering with a distance matrix
Anony-Mousse's answer has some nice suggestions for algorithms you could use to cluster given the distance matrix.
We used Spectral Clustering with left_x, right_x, top_y, bottom_y coordinates as features with pretty good results.
I want to find percentage similarity between uncolored images. Specifically, I want to compare my own drawing with an image. Here's an example image:
I don't have any knowledge about image-processing. What algorithms can be used to achieve my goal? Any guidance would be appreciated.
If your images are both black and white, you could compute the Hausdorff distance. In simple words, each black pixel is a point. For each point of image A, you compute the closest point of image B. You get a list of distances. The Hausdorff distance is the greatest value in this list. The smaller it is, the more similar your images are.
You will have to compute this for several relative positions/angles/aspect ratios between your two images, in order to find the position that matches best.
You can extend this method to any non-B&W image by computing the edges first.
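As an illustration, a small OpenCV sketch of the symmetric Hausdorff distance between two same-sized binary drawings (black strokes on a white background), using a distance transform so each direction is a single pass over the image:

```cpp
// Sketch: both images are CV_8U, same size, black (0) pixels are the points.
#include <opencv2/imgproc.hpp>
#include <algorithm>

static double directedHausdorff(const cv::Mat& a, const cv::Mat& b)
{
    // distanceTransform gives, for every pixel, the distance to the nearest
    // zero pixel of "b", i.e. to the nearest black pixel of drawing B.
    cv::Mat dist;
    cv::distanceTransform(b, dist, cv::DIST_L2, 3);
    double worst = 0;
    for (int y = 0; y < a.rows; ++y)
        for (int x = 0; x < a.cols; ++x)
            if (a.at<uchar>(y, x) == 0)              // black pixel of A
                worst = std::max(worst, (double)dist.at<float>(y, x));
    return worst;
}

double hausdorff(const cv::Mat& a, const cv::Mat& b)
{
    return std::max(directedHausdorff(a, b), directedHausdorff(b, a));
}
```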
I know how to implement the n log n closest pair of points algorithm (Shamos and Hoey) for the 2D case (x and y). However, for a problem where latitude and longitude are given, this approach cannot be used. The distance between two points is calculated using the haversine formula.
I would like to know if there is some way to convert these latitudes and longitudes to their respective x and y coordinates and find the closest pair of points, or if there is another technique that can be used to do it.
I would translate them to three-dimensional coordinates and then use the divide and conquer approach with a plane rather than a line. This will definitely work correctly. We can be assured of this because, when only examining points on the sphere, the two closest points by arc distance (distance walking over the surface) will also be the two closest by 3-d Cartesian distance. This will have running time O(n log n).
To translate to 3-d coordinates, the easiest way is to make (0,0,0) the center of the earth; then your coordinates are (cos(lat)*cos(lon), cos(lat)*sin(lon), sin(lat)). For these purposes I'm using a scale in which the radius of the Earth is 1, in order to simplify calculations. If you want distance in some other unit, just multiply all quantities by the radius of the Earth measured in that unit.
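For example, a tiny sketch of that conversion (angles in radians, unit-sphere radius):

```cpp
#include <cmath>

struct P3 { double x, y, z; };

// Latitude/longitude in radians, Earth treated as a unit sphere;
// multiply each component by the Earth's radius for real units.
P3 toCartesian(double lat, double lon)
{
    return { std::cos(lat) * std::cos(lon),
             std::cos(lat) * std::sin(lon),
             std::sin(lat) };
}
```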
I should note that all this assumes that the earth is a sphere. It's not exactly one and points may actually have altitude as well, so these answers won't really be completely exact, but they will be very close to correct in almost every case.