I have three sections (top, mid, bot) of grayscale images (3D). In each section, I have a point with coordinates (x,y) and intensity values [0-255]. The distance between each section is 20 pixels.
I created an illustration to show how those images were generated using a microscope:
Illustration
Illustration (side view): the red line is the object of interest. Blue stars represent the dots visible in the top, mid, and bot sections. The (x,y) coordinates of these dots are known. The length of the object stays the same, but it can rotate in space and go 'out of focus' (the illustration shows a rotating line at time point 5). At time point 1, the red line is at rest (in the 2D image: two dots separated by a distance equal to the length of the object).
I want to estimate the x,y,z-coordinates of the end points (represented as stars) by using the changes in intensity, the known length of the object, and the information in the sections I have. Any help would be appreciated.
Here is an example of images:
Bot section
Mid section
Top section
My 3D PSF data:
https://drive.google.com/file/d/1qoyhWtLDD2fUy2zThYUgkYM3vMXxNh64/view?usp=sharing
Attempt so far:
I guess the correct approach would be to record three images with slightly different z-coordinates for your bot and your top frame, then do a 3D-deconvolution (using Richardson-Lucy or whatever algorithm).
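If you do go that route with a denser z-stack, the Image Processing Toolbox already provides Richardson-Lucy deconvolution. A minimal sketch, assuming stack is a 3D double array holding your closely spaced sections and psf is your measured 3D PSF:
% Minimal sketch (assumed variable names stack and psf): Richardson-Lucy deconvolution
restored = deconvlucy(stack, psf, 20);   % 20 iterations; adjust to taste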
However, a simpler approach is the one I have outlined in my comment. If you use the data for a publication, I strongly recommend emphasizing that this is only an estimate and including the steps you used to obtain it.
I'd suggest the following procedure:
Since I do not have your PSF data, I fake some by modelling the PSF as a 3D Gaussian. Of course, this is a strong simplification, but you should be able to get the idea behind it.
First, fit a Gaussian to the PSF along z:
[xg, yg, zg] = meshgrid(-32:32, -32:32, -32:32);
rg = sqrt(xg.^2+yg.^2);
psf = exp(-(rg/8).^2) .* exp(-(zg/16).^2);
% add some noise to make it a bit more realistic
psf = psf + randn(size(psf)) * 0.05;
% view psf:
%
subplot(1,3,1);
s = slice(xg,yg,zg, psf, 0,0,[]);
title('faked PSF');
for i=1:2
s(i).EdgeColor = 'none';
end
% data along z through PSF's center
z = reshape(psf(33,33,:),[65,1]);
subplot(1,3,2);
plot(-32:32, z);
title('PSF along z');
% Fit the data
% Generate a function for a Gaussian distribution plus some background
gauss_d = @(x0, sigma, bg, x)exp(-1*((x-x0)/(sigma)).^2)+bg;
ft = fit((-32:32)', z, gauss_d, ...
    'StartPoint', [0 16 0] ... % You may find proper start points by looking at your data
    );
subplot(1,3,3);
plot(-32:32, z, '.');
hold on;
plot(-32:.1:32, feval(ft, -32:.1:32), 'r-');
title('fit to z-profile');
The function that relates the intensity I to the z-coordinate is
gauss_d = @(x0, sigma, bg, x)exp(-1*((x-x0)/(sigma)).^2)+bg;
You can re-arrange this formula for x. Due to the square root, there are two possibilities:
% now make a function that returns the z-coordinate from the intensity
% value:
zfromI = @(I)ft.sigma * sqrt(-1*log(I-ft.bg))+ft.x0;
zfromI2= @(I)ft.sigma * -sqrt(-1*log(I-ft.bg))+ft.x0;
Note that the PSF I have faked is normalized to have one as its maximum value. If your PSF data is not normalized, you can divide the data by its maximum.
Now, you can use zfromI or zfromI2 to get the z-coordinate for your intensity. Again, I should be normalized, i.e. the ratio of the measured intensity to the intensity of your reference spot:
zfromI(.7)
ans =
9.5469
>> zfromI2(.7)
ans =
-9.4644
Note that due to the random noise I have added, your results might look slightly different.
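As a rough way to bring in the known object length mentioned in the question, you could evaluate both branches for each end point and keep the sign combination whose implied 3D distance is closest to that length. A hedged sketch, where L_obj, the end-point coordinates (x1,y1), (x2,y2) and the normalized intensities I1, I2 are all assumed to be given:
% Assumed inputs: L_obj (known object length), (x1,y1,I1) and (x2,y2,I2)
z1cand = [zfromI(I1), zfromI2(I1)];   % two possible z-values for end point 1
z2cand = [zfromI(I2), zfromI2(I2)];   % two possible z-values for end point 2
best = inf; z1 = NaN; z2 = NaN;
for i = 1:2
    for j = 1:2
        len = sqrt((x1-x2)^2 + (y1-y2)^2 + (z1cand(i)-z2cand(j))^2);
        if abs(len - L_obj) < best        % keep the combination whose implied
            best = abs(len - L_obj);      % length best matches the known length
            z1 = z1cand(i); z2 = z2cand(j);
        end
    end
end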
I have a series of discrete points on a plane, but their order is scattered. Here is an instance:
To connect them with a smooth curve, I wrote findSmoothBoundary() to obtain the smooth boundary.
Code
function findSmoothBoundary(boundaryPointSet)
%initialize the current point
currentP = boundaryPointSet(1,:);
%Create a space smoothPointsSet to store the point
smoothPointsSet = NaN*ones(length(boundaryPointSet),2);
%delete the current point from the boundaryPointSet
boundaryPointSet(1,:) = [];
ptsNum = 1; %record the number of smoothPointsSet
smoothPointsSet(ptsNum,:) = currentP;
while ~isempty(boundaryPointSet)
%utilize the built-in knnsearch() to
%achieve the nearest point of current point
nearestPidx = knnsearch(boundaryPointSet,currentP);
currentP = boundaryPointSet(nearestPidx,:);
ptsNum = ptsNum + 1;
smoothPointsSet(ptsNum,:) = currentP;
%delete the nearest point from boundaryPointSet
boundaryPointSet(nearestPidx,:) = [];
end
%visualize the smooth boundary
plot(smoothPointsSet(:,1),smoothPointsSet(:,2))
axis equal
end
Although findSmoothBoundary() can find the smooth boundary correctly, its efficiency is much lower (about the data, please see here).
So I would like to know:
How can I find the order of the discrete points efficiently?
Data
theta = linspace(0,2*pi,1000)';
boundaryPointSet= [2*sin(theta),cos(theta)];
tic;
findSmoothBoundary(boundaryPointSet)
toc;
%Elapsed time is 4.570719 seconds.
This answer is not perfect because I have to make a few hypotheses for it to work. However, for the vast majority of cases, it should work as intended. Moreover, from the link you gave in the comments, I think these hypotheses are at least weak, if not satisfied by definition:
1. The points form a single connected region
2. The center of mass of your points lies in the convex hull of those points
If these hypothesis are respected, you can do the following (Full code available at the end):
Step 1 : Calculate the center of mass of your points
Means=mean(boundaryPointSet);
Step 2 : Change variables to set the origin to the center of mass
boundaryPointSet(:,1)=boundaryPointSet(:,1)-Means(1);
boundaryPointSet(:,2)=boundaryPointSet(:,2)-Means(2);
Step 3: Convert the coordinates to polar
[Angles,Radius]=cart2pol(boundaryPointSet(:,1),boundaryPointSet(:,2));
Step 4: Sort the angles and use the same ordering to sort the radii
[newAngles,ids]=sort(Angles);
newRadius=Radius(ids);
Step 5: Go back to Cartesian coordinates and re-add the coordinates of the center of mass:
[X,Y]=pol2cart(newAngles,newRadius);
X=X+Means(1);
Y=Y+Means(2);
Full Code
%%% Find smooth boundary
fid=fopen('SmoothBoundary.txt');
A=textscan(fid,'%f %f','delimiter',',');
boundaryPointSet=cell2mat(A);
boundaryPointSet(any(isnan(boundaryPointSet),2),:)=[];
idx=randperm(size(boundaryPointSet,1));
boundaryPointSet=boundaryPointSet(idx,:);
tic
plot(boundaryPointSet(:,1),boundaryPointSet(:,2))
%% Find mean value of all parameters
Means=mean(boundaryPointSet);
%% Center values around Mean point
boundaryPointSet(:,1)=boundaryPointSet(:,1)-Means(1);
boundaryPointSet(:,2)=boundaryPointSet(:,2)-Means(2);
%% Get polar coordinates of your points
[Angles,Radius]=cart2pol(boundaryPointSet(:,1),boundaryPointSet(:,2));
[newAngles,ids]=sort(Angles);
newRadius=Radius(ids);
[X,Y]=pol2cart(newAngles,newRadius);
X=X+Means(1);
Y=Y+Means(2);
toc
figure
plot(X,Y);
Note: as your values are already sorted in your input file, I had to mess them up a bit by permuting them.
Outputs :
Boundary
Elapsed time is 0.131808 seconds.
Messed Input :
Output :
I have a list of points moving in two dimensions (x- and y-axis) represented as rows in an array. I might have N points - i.e., N rows:
1 t1 x1 y1
2 t2 x2 y2
.
.
.
N tN xN yN
where ti, xi, and yi are the time index, x-coordinate, and y-coordinate of point i. The time index ti is an integer from 1 to T. The number of points at each such possible time index can vary from 0 to N (still with only N points in total).
My goal is to filter out all the points that do not move in a certain way, or to keep only those that do. A point must move in a parabolic trajectory, with decreasing x- and y-coordinates (i.e., moving to the left and downwards only). Points with other dynamic behaviour must be removed.
Can I use a simple sorting mechanism on this array and then analyse the order of the time index? I have also considered the fact that points sharing the same time index ti are physically distinct points and so should be paired up with points at other time indices. The complexity of the problem grew, and now I turn to you.
NOTE: You can assume that the points are confined to a sub-region of the (x,y)-plane between two parabolic curves. These curves intersect at only one point: a point close to the origin of motion for any point.
More Information:
I have made some datafiles available:
MATLAB datafile (1.17 kB)
same data as CSV with semicolon as column separator (2.77 kB)
Necessary context:
The datafile holds one uint32 array with 176 rows and 5 columns. The columns are:
pixel x-coordinate in 175-by-175 lattice
pixel y-coordinate in 175-by-175 lattice
discrete theta angle-index
time index (from 1 to T = 10)
row index for this original sorting
The points "live" in a 175-by-175 pixel-lattice - and again inside the upper quadrant of a circle with radius 175. The points travel on the circle circumference in a counterclockwise rotation to a certain angle theta with horizontal, where they are thrown off into something close to a parabolic orbit. Column 3 holds a discrete index into a list with indices 1 to 45 from 0 to 90 degress (one index thus spans 2 degrees). The theta-angle was originally deduces solely from the points by setting up the trivial equations of motions and solving for the angle. This gives rise to a quasi-symmetric quartic which can be solved in close-form. The actual metric radius of the circle is 0.2 m and the pixel coordinate were converted from pixel-coordinate to metric using simple linear interpolation (but what we see here are the points in original pixel-space).
My problem is that some points are not behaving properly and since I need to statistics on the theta angle, I need to remove the points that certainly do NOT move in a parabolic trajoctory. These error are expected and fully natural, but still need to be filtered out.
MATLAB plot code:
% load data and setup variables:
load mat_points.mat;
num_r = 175;
num_T = 10;
num_gridN = 20;
% begin plotting:
figure(1000);
clf;
plot( ...
num_r * cos(0:0.1:pi/2), ...
num_r * sin(0:0.1:pi/2), ...
'Color', 'k', ...
'LineWidth', 2 ...
);
axis equal;
xlim([0 num_r]);
ylim([0 num_r]);
hold all;
% setup grid (yea... went crazy with one):
vec_tickValues = linspace(0, num_r, num_gridN);
cell_tickLabels = repmat({''}, size(vec_tickValues));
cell_tickLabels{1} = sprintf('%u', vec_tickValues(1));
cell_tickLabels{end} = sprintf('%u', vec_tickValues(end));
set(gca, 'XTick', vec_tickValues);
set(gca, 'XTickLabel', cell_tickLabels);
set(gca, 'YTick', vec_tickValues);
set(gca, 'YTickLabel', cell_tickLabels);
set(gca, 'GridLineStyle', '-');
grid on;
% plot points per timeindex (with increasing brightness):
vec_grayIndex = linspace(0,0.9,num_T);
for num_kt = 1:num_T
vec_xCoords = mat_points((mat_points(:,4) == num_kt), 1);
vec_yCoords = mat_points((mat_points(:,4) == num_kt), 2);
plot(vec_xCoords, vec_yCoords, 'o', ...
'MarkerEdgeColor', 'k', ...
'MarkerFaceColor', vec_grayIndex(num_kt) * ones(1,3) ...
);
end
Thanks :)
Why, it looks almost as if you're simulating a radar tracking debris from the collision of two missiles...
Anyway, let's coin a new term: object. Objects are moving along parabolae and at certain times they may emit flashes that appear as points. There are also other points which we are trying to filter out.
We will need some more information:
Can we assume that the objects obey the physics of things falling under gravity?
Must every object emit a point at every timestep during its lifetime?
Speaking of lifetime, do all objects begin at the same time? Can some expire before others?
How precise is the data? Is it exact? Is there a measure of error? To put it another way, do we understand how poorly the points from an object might fit a perfect parabola?
Sort the data with (index, time) as keys and, for all locations of a point i, see if they follow a parabolic trajectory?
Which part are you having problems with? Sorting should be very easy. IMHO, it is the second part (testing whether a set of points follows a parabolic trajectory) that is difficult.
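A hedged sketch of that second part: once the rows believed to belong to one object have been grouped and sorted by time index, fit a parabola y(x) with polyfit and accept the track only if the residual is small and both coordinates decrease monotonically. Grouping the rows into candidate tracks is a separate association problem; pts and the tolerance tol are assumptions:
function ok = isParabolic(pts, tol)
% pts: rows [t x y] of one candidate track, sorted by t; tol: RMS residual tolerance
t = double(pts(:,1)); x = double(pts(:,2)); y = double(pts(:,3));
if numel(t) < 3, ok = false; return; end
movesCorrectly = all(diff(x) < 0) && all(diff(y) < 0);   % left and down only
p = polyfit(x, y, 2);                  % fit y = p(1)*x^2 + p(2)*x + p(3)
res = y - polyval(p, x);               % residuals of the parabola fit
ok = movesCorrectly && sqrt(mean(res.^2)) < tol;
end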
I've been using a function file [ret]=drawellipse(x,y,a,b,angle,steps,color,img), calling it from a script to draw random ellipses in an image. But once I set a random center point (x,y) and random a and b, there is a high probability that the ellipses will intersect. How can I prevent the intersections? (I'm supposed to draw ellipses that are all separate from each other.)
I also have a function file that checks whether two ellipses overlap: overlap = overlap_ellipses(x0,y0,a0,b0,angle0,x1,y1,a1,b1,angle1). If the two ellipses overlap, then overlap=1, otherwise overlap=0.
Based on all this, I tested in the command window:
x=rand(4,1)*400; % x and y are the random coordinates for the centers of the ellipses
y=rand(4,1)*400;
a=[50 69 30 60]; % major axes; I intend to use random values here too in the future
b=[20 40 10 40]; % minor axes
angle=[30 90 45 0]; % angle of ellipse
steps=10000;
color=[255 0 0]; % inputs for another function file to draw the ellipse
img=zeros(500,500,3);
Next, I want to display the ellipses if overlap==0; if overlap==1, decrease a and b until there is no intersection. Lastly, imshow the img.
for i=1:length(x)
img=drawellipse(x(i),y(i),a(i),b(i),angle(i),steps,color,img);
end
Right now I have difficulty coding the middle part. How can I use the if statement on the value of overlap, and how do I make the index correspond to the ellipse I need to draw?
I tested something like
for k=1:(length(x)-1)
overlap = overlap_ellipses(x(1),y(1),a(1),b(1),angle(1),x(1+k),y(1+k),a(1+k),b(1+k),angle(1+k))
end
it returns
overlap=0
overlap=0
overlap=1
but not as a single vector [0 0 1]. I can't figure it out and am stuck at this point.
The final image should look like the picture in this Voronoi diagram of ellipses.
(There is no intersection between any two ellipses)
Assuming you are drawing the ellipses into a raster graphics image, you could calculate the pixels you would have to draw for an ellipse, check whether these pixels in the image are still of the background color, and draw the ellipse only if the answer is yes, otherwise reject it (because something else, i.e. another ellipse, is in the way) and try other x,y,a and b.
Alternatively, you could split your image into rectangles (not necessarily of equal size) and place one ellipse in each of those, picking x,y,a,b such that no ellipse exceeds its rectangle - then the ellipses cannot overlap either, but it depends on how much "randomness" your ellipse placement should have whether this suffices.
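A rough sketch of that rectangle idea, reusing the drawellipse function from the question; the 2x2 grid and the 0.4 margin factor are arbitrary choices:
% Sketch: one ellipse per grid cell, sized so it cannot leave its cell
rows = 2; cols = 2; W = 400; H = 400;
cw = W/cols; ch = H/rows;
img = zeros(500,500,3); steps = 10000; color = [255 0 0];
for r = 1:rows
    for c = 1:cols
        ang = rand*180;
        a = rand*0.4*min(cw,ch);             % semi-axes bounded by the cell size
        b = rand*0.4*min(cw,ch);
        m = max(a,b);                        % bounding radius of the ellipse
        cx = (c-1)*cw + m + rand*(cw - 2*m); % center kept at least m away from
        cy = (r-1)*ch + m + rand*(ch - 2*m); % every border of its cell
        img = drawellipse(cx, cy, a, b, ang, steps, color, img);
    end
end
imshow(uint8(img));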
The mathematically rigorous way would be to store x,y,a,b of each drawn ellipse and for each new ellipse, do pairwise checks with each of those whether they have common points by solving a system of two quadratic equations. However, this might be a bit complicated, especially once the angle is not 0.
Edit in response to the added code: Instead of fixing all x's and y's before the loop, you can determine them inside the loop. Since you know how many ellipses you want, but not how many you have to sample, you need a while loop. The test loop you give may come in handy, but you need to compare all previous ellipses to the one created in the loop iteration, not the first one.
i=1;
while (i<=4) %# or length(a), or, more elegantly, some pre-defined max
x(i) = rand*400; y(i) = rand*400; %# or take x and y as given and decrease a and b
%# now, check overlap for given center
overlap = false;
for k=1:(i-1)
overlap = overlap || overlap_ellipses(x(i),y(i),a(i),b(i),angle(i),x(k),y(k),a(k),b(k),angle(k))
end
if (~overlap)
img = drawellipse(x(i),y(i),a(i),b(i),angle(i),steps,color,img);
i = i+1; %# determine next ellipse
end %# else x(i) and y(i) will be overwritten in next while loop iteration
end
Of course, if a and b are fixed, it may happen that no ellipse fits the image dimensions if the already present ones are unfortunately placed, resulting in an infinite loop.
Regarding your plan of leaving the center fixed and decreasing the ellipse's size until it fits: where does your overlap_ellipses method come from? Maybe it can be adapted to return a factor by which one ellipse needs to be shrunk to fit next to the other (and 1 if it already fits)?
The solution proposed by @arne.b (the first one) is a good way to rasterize non-overlapping ellipses.
Let me illustrate that idea with an example. I will be extending my previous answer:
%# color image
I = imread('pears.png');
sz = size(I);
%# parameters of ellipses
num = 7;
h = zeros(1,num);
clr = lines(num); %# color of each ellipse
x = rand(num,1) .* sz(2); %# center x-coords
y = rand(num,1) .* sz(1); %# center y-coords
a = rand(num,1) .* 200; %# major axis length
b = rand(num,1) .* 200; %# minor axis length
angle = rand(num,1) .* 360; %# angle of rotation
%# label image, used to hold rasterized ellipses
BW = zeros(sz(1),sz(2));
%# randomly place ellipses one-at-a-time, skip if overlaps previous ones
figure, imshow(I)
axis on, hold on
for i=1:num
%# ellipse we would like to draw directly on image matrix
[ex,ey] = calculateEllipse(x(i),y(i), a(i),b(i), angle(i), 100);
%# lets plot the ellipse (overlayed)
h(i) = plot(ex,ey, 'LineWidth',2, 'Color',clr(i,:));
%# create mask for image pixels inside the ellipse polygon
mask = poly2mask(ex,ey,sz(1),sz(2));
%# get the perimter of this mask
mask = bwperim(mask,8);
%# skip if there is an existing overlapping ellipse
if any( BW(mask)~=0 ), continue, end
%# use the mask to place the ellipse in the label image
BW(mask) = i;
end
hold off
legend(h, cellstr(num2str((1:num)','Line%d')), 'Location','BestOutside')
%# set pixels corresponding to ellipses using specified colors
clr = im2uint8(clr);
II = I;
for i=1:num
BW_ind = bsxfun(@plus, find(BW==i), prod(sz(1:2)).*(0:2));
II(BW_ind) = repmat(clr(i,:), [size(BW_ind,1) 1]);
end
figure, imshow(II, 'InitialMagnification',100, 'Border','tight')
Note how the overlap test is performed in the order the ellipses are added, thus after Line1 (blue) and Line2 (green) are drawn, Line3 (red) will be skipped because it overlaps one of the previous ones, and so on for the rest...
One option is to keep track of all the ellipses already drawn, and to make sure the next set of [x,y,a,b] does not produce a new ellipse which intersects with the existing ones. You can either invoke random numbers until you come up with a set that fulfills the condition, or once you have a set which violates the condition, decrease the values of a and/or b until no intersection occurs.
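A hedged sketch of the shrinking variant, keeping the centers fixed and reusing the overlap_ellipses function from the question; the 10% shrink factor and the minimum-size cutoff are arbitrary:
% Sketch: shrink each ellipse until it no longer overlaps any earlier one
for i = 2:numel(x)
    while a(i) > 1 && b(i) > 1               % give up below a minimum size
        clash = false;
        for k = 1:i-1
            if overlap_ellipses(x(i),y(i),a(i),b(i),angle(i), ...
                                x(k),y(k),a(k),b(k),angle(k))
                clash = true; break;
            end
        end
        if ~clash, break; end
        a(i) = 0.9*a(i);                     % shrink both axes by 10% and retest
        b(i) = 0.9*b(i);
    end
end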
Here's the problem: I have a number of binary images composed of traces of different thickness. Below there are two images to illustrate the problem:
First Image - size: 711 x 643 px
Second Image - size: 930 x 951 px
What I need is to measure the average thickness (in pixels) of the traces in the images. In fact, the average thickness of traces in an image is a somewhat subjective measure. So, what I need is a measure that has some correlation with the radius of the trace, as indicated in the figure below:
Notes
Since the measure doesn't need to be very precise, I am willing to trade precision for speed. In other words, speed is an important factor to the solution of this problem.
There might be intersections in the traces.
The trace thickness might not be constant, but an average measure is OK (even the maximum trace thickness is acceptable).
The trace will always be much longer than it is wide.
I'd suggest this algorithm:
Apply a distance transformation to the image, so that all background pixels are set to 0, all foreground pixels are set to the distance from the background
Find the local maxima in the distance transformed image. These are points in the middle of the lines. Put their pixel values (i.e. distances from the background) into a list
Calculate the median or average of that list
I was impressed by @nikie's answer, and gave it a try ...
I simplified the algorithm to just get the maximum value, not the mean, thus avoiding the local maxima detection step. I think this is enough if the stroke is well-behaved (although for self-intersecting lines it may not be accurate).
The program in Mathematica is:
m = Import["http://imgur.com/3Zs7m.png"] (* Get image from web*)
s = Abs[ImageData[m] - 1]; (* Invert colors to detect background *)
k = DistanceTransform[Image[s]] (* White Pxs converted to distance to black*)
k // ImageAdjust (* Show the image *)
Max[ImageData[k]] (* Get the max stroke width *)
The generated result is
The numerical value (28.46 px x 2) fits my measurement of 56 px pretty well (although your value is 100 px :*)
Edit - Implemented the full algorithm
Well ... sort of ... instead of searching the local maxima, finding the fixed point of the distance transformation. Almost, but not quite completely unlike the same thing :)
m = Import["http://imgur.com/3Zs7m.png"]; (*Get image from web*)
s = Abs[ImageData[m] - 1]; (*Invert colors to detect background*)
k = DistanceTransform[Image[s]]; (*White Pxs converted to distance to black*)
Print["Distance to Background*"]
k // ImageAdjust (*Show the image*)
Print["Local Maxima"]
weights =
Binarize[FixedPoint[ImageAdjust@DistanceTransform[Image[#], .4] &, s]]
Print["Stroke Width =",
2 Mean[Select[Flatten[ImageData[k]] Flatten[ImageData[weights]], # != 0 &]]]
As you may see, the result is very similar to the previous one, obtained with the simplified algorithm.
From Here. A simple method!
3.1 Estimating Pen Width
The pen thickness may be readily estimated from the area A and perimeter length L of the foreground
T = A/(L/2)
In essence, we have reshaped the foreground into a rectangle whose longer side has length L/2 and measured the length of the shorter side. Stronger modelling of the pen, for instance as a disc yielding circular ends, might allow greater precision, but rasterisation error would compromise the significance.
While precision is not a major issue, we do need to consider bias and singularities.
We should therefore calculate area A and perimeter length L using functions which take into account "roundedness".
In MATLAB
A = bwarea(.)
L = bwarea(bwperim(., 8))
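Filling in the placeholders, a minimal sketch, assuming BW is a logical image with the trace as foreground (dark trace on light background):
BW = ~im2bw(imread('3Zs7m.png'), 0.8);   % assumed: dark trace, light background
A = bwarea(BW);                          % foreground area, "roundedness"-corrected
L = bwarea(bwperim(BW, 8));              % perimeter length, 8-connected boundary
T = A / (L/2)                            % estimated trace width in pixels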
Since I don't have MATLAB at hand, I made a small program in Mathematica:
m = Binarize[Import["http://imgur.com/3Zs7m.png"]] (* Get Image *)
k = Binarize[MorphologicalPerimeter[m]] (* Get Perimeter *)
p = N[2 Count[ImageData[m], Except[1], 2]/
Count[ImageData[k], Except[0], 2]] (* Calculate *)
The output is 36 Px ...
Perimeter image follows
HTH!
It's been 3 years since the question was asked :)
Following the procedure of @nikie, here is a MATLAB implementation of the stroke width.
clc;
clear;
close all;
I = imread('3Zs7m.png');
X = im2bw(I,0.8);
subplottight(2,2,1);   % subplottight is a File Exchange helper; subplot(2,2,1) works as well
imshow(X);
Dist = bwdist(X);      % distance of each trace pixel to the nearest background pixel
subplottight(2,2,2);
imshow(Dist,[]);
RegionMax = imregionalmax(Dist);   % local maxima lie on the centerline of the trace
[r, c] = find(RegionMax);
subplottight(2,2,3);
imshow(RegionMax);
List = zeros(1, numel(r));
for i = 1:numel(r)
    List(i) = Dist(r(i), c(i));    % distance to background at each centerline point
end
fprintf('Stroke Width = %.2f \n', mean(List));
Assuming that the trace has constant thickness, is much longer than it is wide, is not too strongly curved and has no intersections / crossings, I suggest an edge detection algorithm which also determines the direction of the edge, then a rise/fall detector with some trigonometry and a minimization algorithm. This gives you the minimal thickness across a relatively straight part of the curve.
I estimate the error to be up to 25%.
First use an edge detector that gives us the information where an edge is and which direction (in 45° or PI/4 steps) it has. This is done by filtering with 4 different 3x3 matrices (Example).
Usually I'd say it's enough to scan the image horizontally, though you could also scan vertically or diagonally.
Assuming line-by-line (horizontal) scanning, once we find an edge, we check if it's a rise (going from background to trace color) or a fall (to background). If the edge's direction is at a right angle to the direction of scanning, skip it.
If you found one rise and one fall with the correct directions and without any disturbance in between, measure the distance from the rise to the fall. If the edge direction is diagonal, divide by the square root of 2. Store this measure together with the coordinate data.
The algorithm must then search along an edge (can't find a web resource on that right now) for neighboring (by their coordinates) measurements. If there is a local minimum with a padding of maybe 4 to 5 size units to each side (a value to play with - larger: less information, smaller: more noise), this measure qualifies as a candidate. This is to ensure that the ends of the trail or a section bent too much are not taken into account.
The minimum of that would be the measurement. Plausibility check: If the trace is not too tangled, there should be a lot of values in that area.
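A much-simplified sketch of the horizontal-scan part only: it measures the rise-to-fall run length on every row and takes a low percentile as a robust near-minimum, ignoring the edge-direction test and the along-edge local-minimum search described above. BW is assumed to be a logical image with the trace as foreground:
widths = [];
for r = 1:size(BW,1)
    d = diff([0, BW(r,:), 0]);             % +1 at each rise, -1 at each fall
    runLen = find(d == -1) - find(d == 1); % horizontal run lengths on this row
    widths = [widths, runLen];             % growing in a loop is fine for a quick sketch
end
estimate = prctile(widths, 10)             % near-minimal width (Statistics Toolbox)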
Please comment if there are more questions. :-)
Here is an answer that works in any computer language without the need of special functions...
Basic idea: Try to fit a circle into the black areas of the image. If you can, try with a bigger circle.
Algorithm:
set image background = 0 and trace = 1
initialize array result[]
set minimalExpectedWidth
set w = minimalExpectedWidth
loop
set counter = 0
create a matrix of zeros size w x w
within a circle of diameter w in that matrix, put ones
calculate the area of the circle (the number of ones in the matrix, approximately PI * (w/2)^2)
loop through all pixels of the image
optimization: if current pixel is of background color -> continue loop
multiply the matrix with the image at each pixel (e.g. filtering the image with that matrix)
(you can do this using the current x and y position and a double for loop from 0 to w)
take the sum of the result of each multiplication
if the sum equals the calculated circle's area, increment counter by one
store in result[w - minimalExpectedWidth]
increment w by one
optimization: include algorithm from further down here
while counter is greater than zero
Now the result array contains the number of matches for each tested width.
Graph it to have a look at it.
For a width of one this will be equal to the number of pixels of trace color. For greater width values, fewer circles will fit into the trace. The result array will thus steadily decrease until there is a sudden drop. This is because the filter matrix with the circular area of that width now only fits into intersections.
Right before the drop is the width of your trace. If the width is not constant, the drop will not be that sudden.
I don't have MATLAB here for testing and don't know for sure about a function to detect this sudden drop, but we do know that the decrease is continuous, so I'd take the maximum of the second derivative of the (zero-based) result array like this
Algorithm:
set maximum = 0
set widthFound = 0
set minimalExpectedWidth as above
set prevvalue = result[0]
set index = 1
set prevFirstDerivative = result[1] - prevvalue
loop until index is greater than the result length
firstDerivative = result[index] - prevvalue
set secondDerivative = firstDerivative - prevFirstDerivative
if abs(secondDerivative) > maximum
maximum = abs(secondDerivative)
widthFound = index + minimalExpectedWidth
prevFirstDerivative = firstDerivative
prevvalue = result[index]
increment index by one
return widthFound
Now widthFound is the trace width for which (in relation to width + 1) many more matches were found.
I know that this is in part covered in some of the other answers, but my description is pretty much straightforward and you don't have to have learned image processing to do it.
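The whole procedure can also be written compactly in MATLAB with morphological erosion, which marks exactly the positions where a disk of a given size fits entirely inside the trace; a hedged sketch, assuming BW is a logical image with the trace as foreground and maxW as an assumed upper bound on the width:
minW = 1; maxW = 40;                            % minimalExpectedWidth and an upper bound
counts = zeros(1, maxW - minW + 1);
for w = minW:maxW
    se = strel('disk', floor(w/2), 0);          % disk of diameter roughly w
    counts(w - minW + 1) = nnz(imerode(BW, se));% positions where the disk fully fits
end
plot(minW:maxW, counts);                        % look for the sudden drop
[~, k] = max(abs(diff(counts, 2)));             % largest second difference marks the drop
traceWidth = k + minW                           % rough estimate of the trace width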
I have an interesting solution:
Do edge detection, for edge pixels extraction.
Do physical simulation - consider edge pixels as positively charged particles.
Now put some number of free positively charged particles in the stroke area.
Calculate electrical force equations for determining movement of these free particles.
Simulate particles movement for some time until particles reach position equilibrium.
(As they are repelled from both stroke edges, after some time they will settle on the middle line of the stroke.)
Now stroke thickness/2 would be the average distance from an edge particle to the nearest free particle.
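A very rough MATLAB sketch of this idea, with several simplifications: only edge-to-particle repulsion, a fixed step size and iteration count, and no particle-particle forces. BW is assumed to be a logical image with the stroke as foreground:
E = bwperim(BW);                              % edge pixels act as fixed charges
[ey, ex] = find(E);
[fy, fx] = find(BW & ~E);                     % interior pixels: candidate start positions
sel = randperm(numel(fx), min(200, numel(fx)));
px = fx(sel); py = fy(sel);                   % free particles
for it = 1:100
    for p = 1:numel(px)
        dx = px(p) - ex;  dy = py(p) - ey;
        r2 = dx.^2 + dy.^2 + eps;
        f = [sum(dx ./ r2.^1.5), sum(dy ./ r2.^1.5)];   % net 1/r^2 repulsion from edges
        step = 0.5 * f / (norm(f) + eps);               % small, normalized step
        nx = px(p) + step(1);  ny = py(p) + step(2);
        if BW(min(max(round(ny),1),size(BW,1)), min(max(round(nx),1),size(BW,2)))
            px(p) = nx;  py(p) = ny;          % only move if we stay inside the stroke
        end
    end
end
D = pdist2([ex(:) ey(:)], [px(:) py(:)]);     % Statistics Toolbox; or compute manually
halfWidth = mean(min(D, [], 2));              % edge particle -> nearest free particle
fprintf('Estimated stroke width: %.1f px\n', 2*halfWidth);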