Measuring the average thickness of traces in an image - algorithm

Here's the problem: I have a number of binary images composed of traces of different thickness. Below are two images to illustrate the problem:
First Image - size: 711 x 643 px
Second Image - size: 930 x 951 px
What I need is to measure the average thickness (in pixels) of the traces in the images. In fact, the average thickness of traces in an image is a somewhat subjective measure. So, what I need is a measure that has some correlation with the radius of the trace, as indicated in the figure below:
Notes
Since the measure doesn't need to be very precise, I am willing to trade precision for speed. In other words, speed is an important factor to the solution of this problem.
There might be intersections in the traces.
The trace thickness might not be constant, but an average measure is OK (even the maximum trace thickness is acceptable).
The trace will always be much longer than it is wide.

I'd suggest this algorithm:
Apply a distance transformation to the image, so that all background pixels are set to 0, all foreground pixels are set to the distance from the background
Find the local maxima in the distance transformed image. These are points in the middle of the lines. Put their pixel values (i.e. distances from the background) into a list
Calculate the median or average of that list
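A minimal sketch of those three steps in Python, assuming SciPy and imageio are available ('traces.png' is a placeholder file name, and the threshold of 128 assumes dark traces on a light background):
import numpy as np
from scipy import ndimage
from imageio.v3 import imread

img = imread('traces.png')          # placeholder file name
if img.ndim == 3:                   # collapse RGB to grayscale if needed
    img = img.mean(axis=2)
foreground = img < 128              # assumed: dark traces, light background

# 1. Distance transform: each foreground pixel gets its distance to the background.
dist = ndimage.distance_transform_edt(foreground)

# 2. Local maxima of the distance map lie on the centerline of the traces.
local_max = (dist == ndimage.maximum_filter(dist, size=3)) & (dist > 0)

# 3. The median centerline distance is the trace radius; double it for thickness.
print("Estimated thickness:", 2 * np.median(dist[local_max]))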

I was impressed by @nikie's answer, and gave it a try ...
I simplified the algorithm to just take the maximum value rather than the mean, thus avoiding the local-maxima detection step. I think this is enough if the stroke is well behaved (although for self-intersecting lines it may not be accurate).
The program in Mathematica is:
m = Import["http://imgur.com/3Zs7m.png"] (* Get image from web*)
s = Abs[ImageData[m] - 1]; (* Invert colors to detect background *)
k = DistanceTransform[Image[s]] (* White Pxs converted to distance to black*)
k // ImageAdjust (* Show the image *)
Max[ImageData[k]] (* Get the max stroke width *)
The generated result is
The numerical value (28.46 px × 2) fits my measurement of 56 px quite well (although your value is 100 px :* )
Edit - Implemented the full algorithm
Well ... sort of ... instead of searching for the local maxima, I find the fixed point of the distance transformation. Almost, but not quite, completely unlike the same thing :)
m = Import["http://imgur.com/3Zs7m.png"]; (*Get image from web*)
s = Abs[ImageData[m] - 1]; (*Invert colors to detect background*)
k = DistanceTransform[Image[s]]; (*White Pxs converted to distance to black*)
Print["Distance to Background*"]
k // ImageAdjust (*Show the image*)
Print["Local Maxima"]
weights =
  Binarize[FixedPoint[ImageAdjust@DistanceTransform[Image[#], .4] &, s]]
Print["Stroke Width =",
2 Mean[Select[Flatten[ImageData[k]] Flatten[ImageData[weights]], # != 0 &]]]
As you may see, the result is very similar to the previous one, obtained with the simplified algorithm.

From Here. A simple method!
3.1 Estimating Pen Width
The pen thickness may be readily estimated from the area A and perimeter length L of the foreground
T = A/(L/2)
In essence, we have reshaped the foreground into a rectangle and measured the length of the longest side. Stronger modelling of the pen, for instance as a disc yielding circular ends, might allow greater precision, but rasterisation error would compromise the significance.
While precision is not a major issue, we do need to consider bias and singularities.
We should therefore calculate area A and perimeter length L using functions which take into account "roundedness".
In MATLAB
A = bwarea(.)
L = bwarea(bwperim(., 8))
Since I don't have MATLAB at hand, I made a small program in Mathematica:
m = Binarize[Import["http://imgur.com/3Zs7m.png"]] (* Get Image *)
k = Binarize[MorphologicalPerimeter[m]] (* Get Perimeter *)
p = N[2 Count[ImageData[m], Except[1], 2]/
Count[ImageData[k], Except[0], 2]] (* Calculate *)
The output is 36 Px ...
Perimeter image follows
HTH!
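For reference, a rough Python equivalent of the same area/perimeter estimate; a sketch assuming scikit-image and the foreground boolean mask from the sketch above (scikit-image's perimeter estimate is comparable to, but not identical to, MATLAB's bwarea/bwperim):
import numpy as np
from skimage import measure

A = foreground.sum()                  # area in pixels
L = measure.perimeter(foreground, 8)  # 8-connected perimeter estimate
print("Estimated pen width:", A / (L / 2.0))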

It's been 3 years since the question was asked :)
Following the procedure of @nikie, here is a MATLAB implementation of the stroke width.
clc;
clear;
close all;
I = imread('3Zs7m.png');
X = im2bw(I, 0.8);                 % binarize: trace = 0, background = 1
subplot(2,2,1);                    % (the original used a subplottight helper)
imshow(X);
Dist = bwdist(X);                  % distance from each trace pixel to the background
subplot(2,2,2);
imshow(Dist, []);
RegionMax = imregionalmax(Dist);   % local maxima lie on the stroke centerline
[x, y] = find(RegionMax ~= 0);
subplot(2,2,3);
imshow(RegionMax);
List = zeros(1, length(x));        % preallocate (size(x) returns a vector)
for i = 1:length(x)
    List(i) = Dist(x(i), y(i));
end
fprintf('Stroke Width = %f \n', mean(List));

Assuming that the trace has constant thickness, is much longer than it is wide, is not too strongly curved and has no intersections / crossings, I suggest an edge detection algorithm which also determines the direction of the edge, then a rise/fall detector with some trigonometry and a minimization algorithm. This gives you the minimal thickness across a relatively straight part of the curve.
I'd estimate the error at up to 25%.
First use an edge detector that gives us the information where an edge is and which direction (in 45° or PI/4 steps) it has. This is done by filtering with 4 different 3x3 matrices (Example).
Usually I'd say it's enough to scan the image horizontally, though you could also scan vertically or diagonally.
Assuming line-by-line (horizontal) scanning, once we find an edge, we check if it's a rise (going from background to trace color) or a fall (to background). If the edge's direction is at a right angle to the direction of scanning, skip it.
If you found one rise and one fall with the correct directions and without any disturbance in between, measure the distance from the rise to the fall. If the direction is diagonal, multiply by the square root of 2. Store this measure together with the coordinate data.
The algorithm must then search along an edge (can't find a web resource on that right now) for neighboring (by their coordinates) measurements. If there is a local minimum with a padding of maybe 4 to 5 size units to each side (a value to play with - larger: less information, smaller: more noise), this measure qualifies as a candidate. This is to ensure that the ends of the trail or a section bent too much are not taken into account.
The minimum of that would be the measurement. Plausibility check: If the trace is not too tangled, there should be a lot of values in that area.
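As a much-simplified sketch of the scanline part (Python assumed; it skips the edge-direction filtering and the diagonal sqrt(2) correction, and the percentile is a guessed stand-in for the local-minimum search):
import numpy as np

def scanline_widths(foreground):
    """Horizontal rise-to-fall run lengths of foreground pixels."""
    widths = []
    for row in foreground.astype(np.int8):
        d = np.diff(np.concatenate(([0], row, [0])))
        rises = np.where(d == 1)[0]   # background -> trace
        falls = np.where(d == -1)[0]  # trace -> background
        widths.extend(falls - rises)
    return np.array(widths)

# Near-vertical trace sections give short runs; near-horizontal ones give
# very long runs, so a low percentile approximates the true thickness.
w = scanline_widths(foreground)
print("Estimated thickness:", np.percentile(w[w > 1], 25))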
Please comment if there are more questions. :-)

Here is an answer that works in any computer language, without the need for special functions...
Basic idea: Try to fit a circle into the black areas of the image. If you can, try with a bigger circle.
Algorithm:
set image background = 0 and trace = 1
initialize array result[]
set minimalExpectedWidth
set w = minimalExpectedWidth
loop
set counter = 0
create a matrix of zeros size w x w
within a circle of diameter w in that matrix, put ones
calculate the area of the circle (= PI * (w/2)^2)
loop through all pixels of the image
optimization: if current pixel is of background color -> continue loop
multiply the matrix with the image at each pixel (e.g. filtering the image with that matrix)
(you can do this using the current x and y position and a double for loop from 0 to w)
take the sum of the result of each multiplication
if the sum equals the calculated circle's area, increment counter by one
store counter in result[w - minimalExpectedWidth]
increment w by one
optimization: include algorithm from further down here
while counter is greater than zero
Now the result array contains the number of matches for each tested width.
Graph it to have a look at it.
For a width of one this will be equal to the number of pixels of trace color. For greater width values, fewer circles will fit into the trace. The result array will thus steadily decrease until there is a sudden drop. This is because the filter matrix with the circular area of that width now only fits into intersections.
Right before the drop is the width of your trace. If the width is not constant, the drop will not be that sudden.
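A sketch of the matching loop in Python, using convolution instead of the explicit double loop (SciPy assumed; the width range and the 0.5 tolerance for floating-point error are arbitrary choices):
import numpy as np
from scipy.signal import fftconvolve

def count_circle_fits(foreground, w):
    """Count positions where a full disc of diameter w fits inside the trace."""
    r = w / 2.0
    yy, xx = np.mgrid[:w, :w]
    disc = ((xx - r + 0.5) ** 2 + (yy - r + 0.5) ** 2 <= r ** 2).astype(float)
    hits = fftconvolve(foreground.astype(float), disc, mode='same')
    return int(np.sum(hits >= disc.sum() - 0.5))  # full-area matches only

result = [count_circle_fits(foreground, w) for w in range(1, 41)]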
I don't have MATLAB here for testing and don't know for sure about a function to detect this sudden drop, but we do know that the decrease is continuous, so I'd take the maximum absolute value of the second derivative of the (zero-based) result array, like this
Algorithm:
set maximum = 0
set widthFound = 0
set minimalExpectedWidth as above
set prevvalue = result[0]
set index = 1
set prevFirstDerivative = result[1] - prevvalue
loop while index is less than result length
firstDerivative = result[index] - prevvalue
set secondDerivative = firstDerivative - prevFirstDerivative
if abs(secondDerivative) > maximum
maximum = abs(secondDerivative)
widthFound = index + minimalExpectedWidth
prevFirstDerivative = firstDerivative
prevvalue = result[index]
increment index by one
return widthFound
Now widthFound is the trace width: the width at which, relative to width + 1, many more matches were found.
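The same detection in a couple of NumPy lines (minimal_expected_width mirrors the pseudocode's minimalExpectedWidth, i.e. 1 if you used the convolution sketch above; exact off-by-one alignment depends on how result is indexed):
import numpy as np

second = np.diff(result, n=2)       # discrete second derivative
width_found = int(np.argmax(np.abs(second))) + minimal_expected_width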
I know that this is in part covered in some of the other answers, but my description is pretty much straightforward and you don't have to have learned image processing to do it.

I have an interesting solution:
Do edge detection to extract the edge pixels.
Do a physical simulation - consider the edge pixels as positively charged particles.
Now put some number of free positively charged particles in the stroke area.
Calculate the electrical force equations to determine the movement of these free particles.
Simulate the particles' movement for some time, until they reach an equilibrium position.
(As they are repelled from both stroke edges, after some time they will settle on the middle line of the stroke.)
Now the stroke thickness / 2 would be the average distance from each edge particle to its nearest free particle.
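A toy sketch of that simulation (Python assumed; the step count, damping, and time step are arbitrary, and the O(free × edge) force loop is purely illustrative):
import numpy as np

def settle_particles(edge_pts, free_pts, steps=200, dt=0.1, damping=0.9):
    """Free particles repelled by fixed edge particles drift to the centerline."""
    free = free_pts.astype(float).copy()
    vel = np.zeros_like(free)
    for _ in range(steps):
        d = free[:, None, :] - edge_pts[None, :, :]     # vectors edge -> free
        r2 = (d ** 2).sum(axis=-1) + 1e-9
        force = (d / r2[..., None] ** 1.5).sum(axis=1)  # 1/r^2 repulsion
        vel = damping * (vel + dt * force)
        free += dt * vel
    return free

def half_thickness(edge_pts, free):
    """Average distance from each edge particle to its nearest free particle."""
    d = np.linalg.norm(edge_pts[:, None, :] - free[None, :, :], axis=-1)
    return d.min(axis=1).mean()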

Related

Cover a polygonal line using the fewest given rectangles while keeping its continuity

Given a list of points forming a polygonal line, and both height and width of a rectangle, how can I find the number and positions of all rectangles needed to cover all the points?
The rectangles should be rotated and may overlap, but must follow the path of the polyline (A rectangle may contain multiple segments of the line, but each rectangle must contain a segment that is contiguous with the previous one.)
Having the intersections fall on the smallest side of the rectangle, when possible, would be much appreciated.
All the solutions I found so far were not clean, here is the result I get:
You should see that it gives a good render in near-flat cases, but overlaps too much in big curves. One rectangle could clearly be removed if the previous one were offset.
Actually, I put a rectangle centered at width/2 along the line and rotate it using convex hull and modified rotating calipers algorithms, and reiterate starting at the intersection point of the previous rectangle and the line.
You may observe that I took inspiration from the minimum oriented rectangle bounding box algorithm, for the orientation, but it doesn't include the cutting aspect, nor the fixed size.
Thanks for your help!
I modified k-means to solve this. It's not fast, it's not optimal, it's not guaranteed, but (IMHO) it's a good start.
There are two important modifications:
1- The distance measure
I used a Chebyshev-distance-inspired measure to see how far points are from each rectangle. To find the distance from the points to each rectangle, I first transformed all points to a new coordinate system, shifted to the center of the rectangle and rotated to its direction:
Then I used these transformed points to calculate distance:
d = max(2*abs(X)/w, 2*abs(Y)/h);
It gives equal values for all points that are the same distance from each side of the rectangle. The result is less than 1.0 for points that lie inside the rectangle. Now we can assign each point to its closest rectangle.
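A sketch of that distance in Python (NumPy assumed; angle is taken to be the rectangle's rotation in radians):
import numpy as np

def rect_distance(points, center, angle, w, h):
    """Chebyshev-like distance from points to a rotated w-by-h rectangle.
    Values below 1.0 mean the point lies inside the rectangle."""
    p = points - center                      # shift to rectangle center
    c, s = np.cos(-angle), np.sin(-angle)    # rotate into rectangle frame
    X = c * p[:, 0] - s * p[:, 1]
    Y = s * p[:, 0] + c * p[:, 1]
    return np.maximum(2 * np.abs(X) / w, 2 * np.abs(Y) / h)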
2- Strategy for updating cluster centers
Each cluster center is a combination of C, the center of the rectangle, and a, its rotation angle. At each iteration, a new set of points is assigned to a cluster. Here we have to find C and a so that the rectangle covers the maximum possible number of points. I don't know if there is an analytical solution for that, but I used a statistical approach: I updated C using a weighted average of the points, and used the direction of the first principal component of the points to update a. As the weight of each point in the weighted average I used the proposed distance raised to the power of 500. This moves the rectangle towards points that are located outside of it.
How to Find K
Initialize it to 1 and increase it until all distances from points to their corresponding rectangles become less than 1.0, meaning all points are inside a rectangle.
The results
Iterations 0, 10, 20, 30, 40, and 50 of updating cluster centers (rectangles):
Solution for test case 1:
Trying Ks: 2, 4, 6, 8, 10, and 12 for complete coverage:
Solution for test case 2:
P.S.: I used parts of Chalous Road as data. It was fun downloading it from Google Maps. Then I used the technique described here to sample a set of equally spaced points.
It’s a little late and you’ve probably figured this out. But, I was free today and worked on the constraint reflected in your last edit (continuity of segments). As I said before in the comments, I suggest using a greedy algorithm. It’s composed of two parts:
A search algorithm that looks for the furthest point from an initial point (I used a binary search), such that all points between them lie inside a rectangle of the given w and h.
A repeated loop that finds best rectangle at each step and advances the initial point.
Their pseudocode, respectively:
function getBestMBR( P, iFirst, w, h )
nP = length(P);
iStart = iFirst;
iEnd = nP;
while iStart <= iEnd
m = floor((iStart + iEnd) / 2);
MBR = getMBR(P[iFirst->m]);
if (MBR.w < w) & (MBR.h < h) {*}
iStart = m + 1;
iLast = m;
bestMBR = MBR;
else
iEnd = m - 1;
end
end
return bestMBR, iLast;
end
function getRectList( P, w, h )
nP = length(P);
rects = [];
iFirst = 1;
iLast = iFirst;
while iLast < nP
[bestMBR, iLast] = getBestMBR(P, iFirst, w, h);
rects.add(bestMBR.x, bestMBR.y, bestMBR.a);
iFirst = iLast;
end
return rects;
end
Solution for test case 1:
Solution for test case 2:
Just keep in mind that it’s not meant to find the optimal solution, but finds a sub-optimal one in a reasonable time. It’s greedy after all.
Another point is that you can improve this a little in order to decrease the number of rectangles. As you can see in the line marked with (*), I kept the resulting rectangle in the direction of the MBR (Minimum Bounding Rectangle), even though you could cover larger MBRs with rectangles of the same w and h if you rotated the rectangle. (1) (2)

In a restricted space with n dimensions, how do you find the coordinates of p points so that they are as far as possible from each other?

For example, in a 2D space with x in [0; 1] and y in [0; 1]. For p = 4, intuitively, I would place a point at each corner of the square.
But what would the general algorithm be?
Edit: The algorithm needs modification if the dimensions are not orthogonal to each other
To uniformly place the points as described in your example you could do something like this:
var combinedSize = 0
for each dimension d in d0..dn {
combinedSize += d.length;
}
val listOfDistancesBetweenPointsAlongEachDimension = new List
for each dimension d in d0..dn {
val percentageOfWholeDimensionSize = d.length/combinedSize
val pointsToPlaceAlongThisDimension = percentageOfWholeDimensionSize * numberOfPoints
listOfDistancesBetweenPointsAlongEachDimension[d.index] = d.length/(pointsToPlaceAlongThisDimension - 1)
}
Run on your example, it gives:
combinedSize = 2
percentageOfWholeDimensionSize = 1 / 2
pointsToPlaceAlongThisDimension = 0.5 * 4
listOfDistancesBetweenPointsAlongEachDimension[0] = 1 / (2 - 1)
listOfDistancesBetweenPointsAlongEachDimension[1] = 1 / (2 - 1)
note: The minus 1 deals with the inclusive interval, allowing points at both endpoints of the dimension
2D case
In 2D (n = 2) the solution is to place your p points evenly on some circle. If you also want to define the distance d between points, then the circle should have a radius of around:
2*Pi*r ≈ p*d
r ≈ (p*d)/(2*Pi)
To be more precise you should use the circumference of a regular p-sided polygon instead of the circle circumference (I am too lazy to do that), or you can compute the distances between the produced points and scale up/down as needed.
So each point p(i) can be defined as:
p(i).x = r*cos((i*2.0*Pi)/p)
p(i).y = r*sin((i*2.0*Pi)/p)
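A minimal sketch of the 2D case (Python assumed):
import numpy as np

def circle_points(p, d):
    """p points evenly spaced on a circle sized so neighbours are ~d apart."""
    r = p * d / (2 * np.pi)
    i = np.arange(p)
    return np.column_stack((r * np.cos(2 * np.pi * i / p),
                            r * np.sin(2 * np.pi * i / p)))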
3D case
Just use a sphere instead of a circle.
ND case
Use an ND hypersphere instead of a circle.
So your question boils down to placing p "equidistant" points on an n-D hypersphere (either its surface or its volume). As you can see, the 2D case is simple, but in 3D this starts to become a problem. See:
Make a sphere with equidistant vertices
sphere subdivision triangulation
As you can see there are quite a few approaches to this (and many more besides, e.g. using a Fibonacci-sequence-generated spiral), which are more or less hard to grasp or implement.
However, if you want to generalize this to ND space, you need to choose a general approach. I would try something like this:
Place p uniformly distributed points inside the bounding hypersphere
each point should have position, velocity and acceleration vectors. You can also place the points randomly (just ensure none are at the same position)...
For each p compute acceleration
each p should repel every other point (the opposite of gravity).
update position
just do a Newton/d'Alembert physics simulation in ND. Do not forget to include some damping of speed so the simulation stops in time. Bound the position and speed to the sphere so points do not cross its border, or reflect the velocity inwards when they hit it.
loop #2 until the max speed of any p drops below some threshold
This will more or less accurately place p points on the circumference of the ND hypersphere, giving you a minimal distance d between them. If there is some special dependency between n and p then there might be better configurations than this, but for arbitrary numbers I think this approach should be safe enough.
Now by modifying the rules in #2 you can achieve two different outcomes: one filling the hypersphere surface (by placing a strongly repulsive mass at the center), and one filling its volume. The radius will also differ between these two options: for one you need to use the surface and for the other the volume...
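A toy sketch of such a simulation (Python assumed; all coefficients are arbitrary and, as noted below, need careful tuning):
import numpy as np

def spread_points(p, n, steps=500, dt=0.01, damping=0.95):
    """Spread p mutually repelling points inside the n-D unit hypersphere."""
    rng = np.random.default_rng(0)
    pos = rng.uniform(-0.5, 0.5, (p, n))
    vel = np.zeros((p, n))
    for _ in range(steps):
        d = pos[:, None, :] - pos[None, :, :]
        r2 = (d ** 2).sum(axis=-1) + np.eye(p)   # dummy 1s avoid self-force
        acc = (d / r2[..., None] ** 1.5).sum(axis=1)
        vel = damping * (vel + dt * acc)
        pos += dt * vel
        norm = np.linalg.norm(pos, axis=1, keepdims=True)
        outside = norm[:, 0] > 1.0               # clamp to the unit sphere
        pos[outside] /= norm[outside]
        vel[outside] = 0.0
    return pos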
Here is an example of a similar simulation used to solve a geometry problem:
How to implement a constraint solver for 2-D geometry?
Here is a preview of the 3D surface case:
The number on top is the max absolute speed of the particles, used to determine when the simulation has stopped, and the white-ish lines are velocity vectors. You need to carefully select the acceleration and damping coefficients so that the simulation is fast ...

What is the best way to check all pixels within certain radius?

I'm currently developing an application that will alert users of incoming rain. To do this I want to check a certain area around the user's location for rainfall (different pixel colours indicate rainfall intensity on the radar image). I would like the checked area to be a circle, but I don't know how to do this efficiently.
Let's say I want to check a radius of 50 km. My current idea is to take a 100 km x 100 km subset of the image (user + 50 km west, east, north, and south) and then check, for each pixel in this subset, whether it's closer to the user than 50 km.
My question here is, is there a better solution that is used for this type of problems?
If the occurrence of the event you are searching for (rain or anything else) is relatively rare, then there's nothing wrong with scanning a square of pixels and then, only after detecting rain in that square, checking whether that rain is within the desired 50 km circle. Note that the key point here is that you don't need to check each pixel of the square for being inside the circle (that would be very inefficient); you have to search for your event (rain) first, and only when you have found it, check whether it falls into the 50 km circle. To implement this efficiently you also have to develop some smart strategy for handling multi-pixel "stains" of rain on your image.
However, since you are scanning a raster image, you can easily implement the well-known Bresenham circle algorithm to find the starting and the ending point of the circle for each scan line. That way you can easily limit your scan to the desired 50km radius.
On second thought, you don't even need the Bresenham algorithm for that. For each row of pixels in your square, calculate the points of intersection of that row with the 50 km circle (using the usual schoolbook formula with a square root), and then check all pixels that fall between these intersection points. Process all rows in the same fashion and you are done.
P.S. Unfortunately, the Wikipedia page I linked does not present the Bresenham algorithm at all. It has code for the Michener circle algorithm instead. The Michener algorithm will also work for circle-rasterization purposes, but it is less precise than the Bresenham algorithm. If you care about precision, find a true Bresenham somewhere. It is actually surprisingly difficult to find on the net: most search hits erroneously present Michener as Bresenham.
There is: you can modify the midpoint circle algorithm to give you, for each y, the x coordinate where the circle starts (and ends; that's the same thing because of symmetry). This array is easy to compute, pseudocode below.
Then you can just iterate over exactly the right part, without checking anything.
Pseudo code:
data = new int[radius];
int f = 1 - radius, ddF_x = 1;
int ddF_y = -2 * radius;
int x = 0, y = radius;
while (x < y)
{
if (f >= 0)
{
y--;
ddF_y += 2; f += ddF_y;
}
x++;
ddF_x += 2; f += ddF_x;
data[radius - y] = x; data[radius - x] = y;
}
Maybe you can try something that will speed up your algorithm.
In the brute-force algorithm you would probably use the equation:
(x-p)^2 + (y-q)^2 < r^2
(p,q) - center of the circle, user position
r - radius (50km)
If you want to find all pixels (x,y) that satisfy the above condition and check them, your algorithm is O(n^2).
Instead of scanning all pixels inside the circle, I would check only the pixels that lie on its border.
In that case, you can use a cleverer way to define the circle:
x = p + r*cos(a)
y = q + r*sin(a)
a - angle measured in radians [0 - 2pi]
Now you can sample some angles, for example twenty of them, iterate, and find all pairs (x,y) that lie on the border of the 50 km circle. Then check whether they are in the rain zone and alert the user.
For more safety I recommend using multiple radii (smaller than 50 km), because the whole rain cloud could be inside the circle and your app would not recognize it. For example, use 3 inner circles (r = 5 km, 15 km, 30 km) and do the same thing. The efficiency of this algorithm depends only on the number of angles and the number of circles.
Pseudocode will be:
checkRainDanger()
    p, q <- position
    radius[] <- array of radii
    for c = 1 to length(radius)
        a = 0
        while (a < 2*pi)
            x = p + radius[c]*cos(a)
            y = q + radius[c]*sin(a)
            if rainZone(x, y)
                return true
            else
                a += pi/10
        end_while
    end_for
    return false // no danger
from math import sqrt

r2 = r * r
for x in range(-r, r + 1):
    max_y = int(sqrt(r2 - x * x))
    for y in range(-max_y, max_y + 1):
        # (x, y) is within the circle - check for rain here
        pass

How to modify d3js fisheye distortion so that it will support radius

I am trying to modify the fisheye in this project so that I can use a radius function to increase the fisheye size. My aim is to make the cells around the mouse appear bigger. The current implementation does not support a radius function. If I use circular instead of scale, I can use the radius function, but in that case I don't know how to use circular.
Either way, help is appreciated :)
Thanks!
The radius parameter on the circular fisheye puts a boundary to the magnification effects. In contrast, in the scale/Cartesian fisheye, the entire graph is modified. The focus cell is enlarged, and other cells are compressed according to how far away they are from the focus. There is no boundary, the compression continues smoothly (getting progressively more compressed) until the edge of the plot. See http://bost.ocks.org/mike/fisheye/#cartesian
If what you want is that cells near to the focus aren't compressed as much (so you can still compare adjacent cells effectively), then the parameter to change is the distortion parameter. Lower distortion will reduce the amount by which the focus cell is magnified, and therefore leave more room for adjacent cells. The default distortion parameter is 3, you're using higher values, which increases the magnification of the focus cell at the expense of all the others.
If changing the distortion doesn't satisfy you, try changing the scale type by using d3.fisheye.scale(d3.scale.sqrt); this will change the function determining how the image magnification changes as you move away from the focus point. (I couldn't get other scale types to work -- log gives an error, and with power scales there is no way to set the exponent.)
Edit
After additional playing around, I'm not satisfied with the results of changing the input scale type. I misunderstood how that would affect it: it doesn't change the scale function for the distortion, but for the raw data, so that changes are different for points above the focus compared to points below the focus. The scale type you give as a parameter to the fisheye scale should be the underlying scale type that makes sense for the data, and is distinct from the fisheye effects.
Instead, I've tried some custom code to add an exponent into the calculation. To understand how it works, you need to first break down the original function:
The original code for the fisheye scale is:
function fisheye(_) {
var x = scale(_),
left = x < a,
range = d3.extent(scale.range()),
min = range[0],
max = range[1],
m = left ? a - min : max - a;
if (m == 0) m = max - min;
return a + (left ? -1 : 1) * m * (d + 1) / (d + (m / Math.abs(x - a)));
}
The _ is the input value, scale is usually a linear scale for which domain and range have been set, a is the focus point in the output range, and d is the distortion parameter.
In other words: to determine the point at which a value is drawn on the distorted scale:
calculate the range position of the value based on the default/undistorted scale;
calculate its distance from the focal point ({distance}, Math.abs(x - a));
calculate the distance between edge of the graph and the focal point ({total distance}, m);
the returned value is offset from the focal point by {total distance} multiplied by
(d + 1) / (d + ({total distance} / {distance}) );
adjust as appropriate depending on whether the value is below or above the focal point.
For an input point that is half-way between the focal point and the edge of the graph on the undistorted scale, the inner fraction {total distance}/{distance} will equal 2. The outer fraction will therefore be (d+1)/(d+2). If d is 0 (no distortion), this will equal 1/2, and the output point will also be half-way between the focal point and the edge of the graph. As the distortion parameter, d, increases, that fraction also increases: at d=1, the output point would be 2/3 of the way from the focal point to the edge of the graph; at d=2, it would be 3/4 of the way to the edge of the graph; and so on.
In contrast, when the input value is very close to the focal point, {distance} is nearly 0, so the inner fraction approaches infinity and the outer fraction approaches 0, so the returned point will be very close to the focal point.
Finally, when the input value is very close to the edge of the graph, {distance} is nearly {total distance}, and both the inner and outer fractions will be nearly 1, so the returned point will also be very close to the edge of the graph.
Those last two identities we want to keep. We just want to change the relationship in between -- how the offset from focal point changes as the input point gets farther away from the focal point and closer to the edge of the graph. Changing the distortion parameter changes the amount of distortion in both near and far values equally. If you reduce the distortion parameter you also reduce the overall magnification, since all the data still has to fit in the same space.
The OP wanted to reduce the rate of change in magnification between cells near the focal point. Reducing the distortion parameter does this, but only by reducing the magnification overall. The ideal approach would be to change the relationship between distance from the focal point and degree of distortion.
My changed code for the same function is:
function fisheye(_) {
var x = scale(_),
left = x < a,
range = d3.extent(scale.range()),
min = range[0],
max = range[1],
m = left ? a - min : max - a,
dp = Math.pow(d, p);
if (m == 0) return left? min : max;
return a + (left ? -1 : 1) * m *
Math.pow(
(dp + 1)
/ (dp + (m / Math.abs(x-a) ) )
, p);
}
I've changed two things: I raise the fraction (d + 1)/(d + {total distance}/{distance}) to a power, and I also replace the original d value with it raised to the same exponent (dp). The first change is what changes the relationship, the second is just an adjustment so that a given distortion parameter will have approximately the same overall magnification effect regardless of the power parameter.
The fraction raised to the power will still be close to zero if the fraction is close to zero, and will still be close to one if the fraction is close to one, so the basic identities remain the same. However, when the power parameter is less than one, the rate of change will be shallower at the edges, and steeper in the middle. For a power parameter greater than 1, the rate of change will be quite steep at the edges and shallower near the focal point.
Example here: http://codepen.io/AmeliaBR/pen/zHqac
The horizontal fisheye scale has a square-root power function (p = 0.5), while the vertical has a square function (p = 2). Both have the same unadjusted distortion parameter (d = 6).
The effect of the square root function is that even the farthest columns still have some visible width, but the change in column width near the focal point is significant. The effect of the power-of-2 function is that the rows far away from the focal point are compressed to nearly invisible height, but the rows above and below the focus are still of significant size. I think this latter version achieves what @piedpiper was hoping for.
I've of course also added a .power function to the fisheye scale in order to set the p parameter, and have set the default value for p to 1, which gives the same results as the original fisheye scale. I use the name power for the method to distinguish it from the exponent method of power scales, which would be used if the underlying scale (before distortion) had a power relationship.

How to filter a set of 2D points moving in a certain way

I have a list of points moving in two dimensions (x- and y-axis) represented as rows in an array. I might have N points - i.e., N rows:
1 t1 x1 y1
2 t2 x2 y2
.
.
.
N tN xN yN
where ti, xi, and yi are the time index, x-coordinate, and y-coordinate for point i. The time index ti is an integer from 1 to T. The number of points at each such time index can vary from 0 to N (still with only N points in total).
My goal is to filter out all the points that do not move in a certain way, or to keep only those that do. A point must move in a parabolic trajectory, with decreasing x- and y-coordinates (i.e., moving to the left and downwards only). Points with other dynamic behaviour must be removed.
Can I use a simple sorting mechanism on this array and then analyse the order of the time index? I have also considered that points sharing the same time index ti are physically distinct points, and so each should be paired up with points at other time indices. The complexity of the problem grew, and now I turn to you.
NOTE: You can assume that the points are confined to a sub-region of the (x,y)-plane between two parabolic curves. These curves intersect at only one point: a point close to the origin of motion for any point.
More Information:
I have made some datafiles available:
MATLAB datafile (1.17 kB)
same data as CSV with semicolon as column separator (2.77 kB)
Necessary context:
The datafile holds one uint32 array with 176 rows and 5 columns. The columns are:
pixel x-coordinate in 175-by-175 lattice
pixel y-coordinate in 175-by-175 lattice
discrete theta angle-index
time index (from 1 to T = 10)
row index for this original sorting
The points "live" in a 175-by-175 pixel-lattice - and again inside the upper quadrant of a circle with radius 175. The points travel on the circle circumference in a counterclockwise rotation to a certain angle theta with horizontal, where they are thrown off into something close to a parabolic orbit. Column 3 holds a discrete index into a list with indices 1 to 45 from 0 to 90 degress (one index thus spans 2 degrees). The theta-angle was originally deduces solely from the points by setting up the trivial equations of motions and solving for the angle. This gives rise to a quasi-symmetric quartic which can be solved in close-form. The actual metric radius of the circle is 0.2 m and the pixel coordinate were converted from pixel-coordinate to metric using simple linear interpolation (but what we see here are the points in original pixel-space).
My problem is that some points are not behaving properly, and since I need to do statistics on the theta angle, I need to remove the points that certainly do NOT move in a parabolic trajectory. These errors are expected and entirely natural, but they still need to be filtered out.
MATLAB plot code:
% load data and setup variables:
load mat_points.mat;
num_r = 175;
num_T = 10;
num_gridN = 20;
% begin plotting:
figure(1000);
clf;
plot( ...
num_r * cos(0:0.1:pi/2), ...
num_r * sin(0:0.1:pi/2), ...
'Color', 'k', ...
'LineWidth', 2 ...
);
axis equal;
xlim([0 num_r]);
ylim([0 num_r]);
hold all;
% setup grid (yea... went crazy with one):
vec_tickValues = linspace(0, num_r, num_gridN);
cell_tickLabels = repmat({''}, size(vec_tickValues));
cell_tickLabels{1} = sprintf('%u', vec_tickValues(1));
cell_tickLabels{end} = sprintf('%u', vec_tickValues(end));
set(gca, 'XTick', vec_tickValues);
set(gca, 'XTickLabel', cell_tickLabels);
set(gca, 'YTick', vec_tickValues);
set(gca, 'YTickLabel', cell_tickLabels);
set(gca, 'GridLineStyle', '-');
grid on;
% plot points per timeindex (with increasing brightness):
vec_grayIndex = linspace(0,0.9,num_T);
for num_kt = 1:num_T
vec_xCoords = mat_points((mat_points(:,4) == num_kt), 1);
vec_yCoords = mat_points((mat_points(:,4) == num_kt), 2);
plot(vec_xCoords, vec_yCoords, 'o', ...
'MarkerEdgeColor', 'k', ...
'MarkerFaceColor', vec_grayIndex(num_kt) * ones(1,3) ...
);
end
Thanks :)
Why, it looks almost as if you're simulating a radar tracking debris from the collision of two missiles...
Anyway, let's coin a new term: object. Objects are moving along parabolae and at certain times they may emit flashes that appear as points. There are also other points which we are trying to filter out.
We will need some more information:
Can we assume that the objects obey the physics of things falling under gravity?
Must every object emit a point at every timestep during its lifetime?
Speaking of lifetime, do all objects begin at the same time? Can some expire before others?
How precise is the data? Is it exact? Is there a measure of error? To put it another way, do we understand how poorly the points from an object might fit a perfect parabola?
Sort the data with (index, time) as keys, and for all locations of a point i, see if they follow a parabolic trajectory?
Which part are you having trouble with? Sorting should be very easy. IMHO, it is the second part (testing whether a set of points follows a parabolic trajectory) that is difficult.
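For the testing part, a hedged sketch (Python assumed): it takes one candidate track already extracted and sorted by time, fits y(x) with a quadratic, and applies the monotonic left-and-down constraint; the tol threshold is an arbitrary placeholder:
import numpy as np

def is_parabolic(x, y, tol=2.0):
    """Accept a track if both coordinates decrease monotonically over time
    and y(x) fits a parabola to within tol pixels."""
    if len(x) < 4:
        return False
    if not (np.all(np.diff(x) < 0) and np.all(np.diff(y) < 0)):
        return False
    coeffs = np.polyfit(x, y, 2)
    residuals = np.abs(np.polyval(coeffs, x) - y)
    return residuals.max() <= tol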
