Calculate the non-projected area inside a contour line created by Basemap - contour

I am currently trying to determine the area inside specific contour lines on a Mollweide map projection using Basemap. Specifically, what I'm looking for is the area of various credible intervals, in square degrees (deg²), of an astronomical event on the celestial sphere. The plot is shown below:
Fortunately, a similar question has already been answered, and the answer helps considerably. The method outlined there is able to account for holes within the contour as well, which is a necessity for my use case. My adapted code for this particular method is provided below:
import numpy as np
from mpl_toolkits.basemap import Basemap
from matplotlib.path import Path
from shapely.geometry import Polygon

# generate a regular lat/lon grid.
nlats = 300; nlons = 300; delta_lon = 2.*np.pi/(nlons-1); delta_lat = np.pi/(nlats-1)
lats = (0.5*np.pi - delta_lat*np.indices((nlats,nlons))[0,:,:])
lons = (delta_lon*np.indices((nlats,nlons))[1,:,:] - np.pi)

map = Basemap(projection='moll', lon_0=0, celestial=True)
# compute native map projection coordinates of the lat/lon grid
x, y = map(lons*180./np.pi, lats*180./np.pi)

areas = []
cred_ints = [0.5, 0.9]
for k in range(len(cred_ints)):
    ## p1 is the cumulative distribution across all points in the sky
    ## (usually determined via KDE on the data)
    cs = map.contourf(x, y, p1, levels=[0.0, cred_ints[k]])

    ## organizing paths and computing individual areas
    paths = cs.collections[0].get_paths()
    area_of_individual_polygons = []
    for p in paths:
        sign = 1  ## <-- assures that area of the first (outer) polygon will be summed
        verts = p.vertices
        codes = p.codes
        idx = np.where(codes == Path.MOVETO)[0]
        vert_segs = np.split(verts, idx)[1:]
        code_segs = np.split(codes, idx)[1:]
        for code, vert in zip(code_segs, vert_segs):
            ## computing the area of the polygon
            area_of_individual_polygons.append(sign*Polygon(vert[:-1]).area)
            sign = -1  ## <-- assures that the other (inner) polygons will be subtracted

    ## computing total area
    total_area = np.sum(area_of_individual_polygons)
    print(total_area)
    areas.append(total_area)
print(areas)
As far as I can tell this method works beautifully... except for one small wrinkle: it calculates the area in the projected coordinate units. I'm not entirely sure what the units are in this case, but they are definitely not deg² (the calculated areas are on the order of 10¹³ units²... maybe the units are pixels?). As alluded to earlier, what I'm looking for is how to calculate the equivalent area in the global coordinate units, i.e. in deg².
Is there a way to convert the area calculated in the projected domain back into the global domain in square degrees? Or perhaps is there a way to modify this method so that it determines the area in deg² from the get-go?
Any help will be greatly appreciated!

For anyone that comes across this question: while I didn't figure out a way to directly convert the projected area back into the global domain, I did develop a new solution by transforming the contour path vertices (this time defined in the lat/lon coordinate system) via the area-preserving sinusoidal projection
x = (λ − λ₀) cos φ,  y = φ,
where φ is the latitude, λ is the longitude, and λ₀ is the longitude of the central meridian.
This flat projection means you can simply use the package Shapely to determine the area of the polygon defined by the projected vertices (in squared radians on the unit sphere, i.e. steradians). Multiplying this number by (180/π)² gives the area in square degrees for the contour in question.
Fortunately, only minor adjustments to the code mentioned in the OP were needed to achieve this. The final code is provided below:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.path import Path
from shapely.geometry import Polygon

# generate a regular lat/lon grid.
nlats = 300; nlons = 300
delta_lat = np.pi/(nlats-1); delta_lon = 2.*np.pi/(nlons-1)
lats = (0.5*np.pi - delta_lat*np.indices((nlats,nlons))[0,:,:])
lons = (delta_lon*np.indices((nlats,nlons))[1,:,:])

### FOLLOWING CODE DETERMINES CREDIBLE INTERVAL SKY AREA IN DEG^2 ###
# collect and organize contour data for each credible interval
cred_ints = [0.5, 0.9]
ci_areas = []
for k in range(len(cred_ints)):
    ## p1 is the cumulative distribution across all points in the sky
    ## (usually determined via KDE of the dataset in question)
    cs = plt.contourf(lons, lats, p1, levels=[0, cred_ints[k]])
    paths = cs.collections[0].get_paths()

    ## organizing paths and computing individual areas
    area_of_individual_polygons = []
    for p in paths:
        sign = 1  ## <-- assures that area of the first (outer) polygon will be summed
        vertices = p.vertices
        codes = p.codes
        idx = np.where(codes == Path.MOVETO)[0]
        verts_segs = np.split(vertices, idx)[1:]
        for verts in verts_segs:
            # transforming the coordinates via an area-preserving sinusoidal projection
            x = (verts[:,0] - (0)*np.ones_like(verts[:,0]))*np.cos(verts[:,1])
            y = verts[:,1]
            verts_proj = np.stack((x, y), axis=1)
            ## computing the area of the polygon (in steradians)
            area_of_individual_polygons.append(sign*Polygon(verts_proj[:-1]).area)
            sign = -1  ## <-- assures that the other (inner) polygons/holes will be subtracted

    ## computing total area, converted to square degrees
    total_area = ((180/np.pi)**2)*np.sum(area_of_individual_polygons)
    ci_areas.append(total_area)
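As a quick sanity check on the steradian-to-deg² conversion (this snippet is not part of the pipeline above, just an illustration), tracing the outline of the whole sphere in lat/lon and pushing it through the same sinusoidal transform should give ~4π steradians, i.e. ~41252.96 deg²:

import numpy as np
from shapely.geometry import Polygon

# outline of the full lon/lat rectangle [0, 2*pi] x [-pi/2, pi/2] after the
# sinusoidal transform x = lambda*cos(phi), y = phi (lambda_0 = 0, as above)
phi = np.linspace(-np.pi/2, np.pi/2, 1000)
right_edge = np.column_stack((2*np.pi*np.cos(phi), phi))       # lambda = 2*pi edge
left_edge = np.column_stack((np.zeros_like(phi), phi))[::-1]   # lambda = 0 edge (x = 0)
area_sr = Polygon(np.vstack((right_edge, left_edge))).area
print(area_sr, 4*np.pi)                  # ~12.566 in both cases
print(area_sr*(180/np.pi)**2)            # ~41252.96 deg^2 (area of the full sky)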

Related

How to find the pixel location of a GPS point in an orthoimage with known orientation and GPS location

I have a problem where I need to determine whether a given (latitude, longitude) GPS point is in a given orthoimage (approx. 1 hectare area) with known real-world orientation and GPS location (corresponding to the center of the image).
That is, given a GPS-point P, I need to determine:
Is point P located in the orthoimage, and if yes,
What is the pixel location of point P in the orthoimage.
My question is summarized in the following image:
As you can see in the image, I know the GPS coordinates of the image (center) and where North is located with respect to the image. Also, I know how many centimeters on the ground each pixel corresponds to.
My question is: What would be an efficient and smart way to achieve the goals in my problem?
One approach I had in mind was to solve a linear mapping between the GPS points and pixel points and then use this mapping to answer both problems 1-2. I thought this could be a reasonable approach even though the earth has curvature and the GPS coordinates are (I'd say) more like a parabolic function of the pixel coordinates: since the distances are very small (one image covers approximately 1 hectare), I could assume, without significant loss of accuracy, that the GPS coordinates change locally linearly with respect to the pixel coordinates.
What do you think? Thank you.
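For the linear-mapping idea, here is a minimal sketch of fitting an affine GPS-to-pixel model with NumPy least squares. It assumes you have a handful of ground-control correspondences; the coordinates, image size and all names below are made up for illustration:

import numpy as np

# Hypothetical correspondences: reference points with known (lon, lat) in
# degrees and known pixel positions (col, row) in the orthoimage.
gps = np.array([[24.9395, 60.1695],
                [24.9412, 60.1697],
                [24.9401, 60.1706],
                [24.9408, 60.1702]])
pix = np.array([[120.0, 610.0],
                [880.0, 560.0],
                [420.0, 150.0],
                [760.0, 330.0]])

# Affine model: [col, row] ~ A @ [lon, lat] + b, solved by linear least squares.
X = np.hstack([gps, np.ones((len(gps), 1))])       # rows: [lon, lat, 1]
coef, *_ = np.linalg.lstsq(X, pix, rcond=None)     # 3x2 matrix stacking A and b

def gps_to_pixel(lon, lat):
    """Map a GPS point into pixel coordinates using the fitted affine model."""
    return np.array([lon, lat, 1.0]) @ coef

col, row = gps_to_pixel(24.9405, 60.1700)
in_image = (0 <= col < 1000) and (0 <= row < 750)  # hypothetical image size
print(col, row, in_image)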
Update:
The orthophotos have been taken with a Phantom 4 Pro drone with gimbal camera system.
I thought about one possibility myself, not perfect but it's a start:
The following information is given:
a rectangular orthoimage Img,
the Yaw of the image (that is, how many degrees the image is facing away from north),
pix_size, the pixel size on the ground (centimeters/pixel).
The problem is: Given an arbitrary GPS-point p = (lat, long), determine the pixel location of p in Img.
Denote c = (latc, longc) and cp = (x,y) as the GPS- and pixel-coordinates of the center point of Img.
Determine how much we must move along the North-South and West-East axes to get from c to p. Let lat_delta = latc - lat and long_delta = longc - long. If lat_delta < 0, p is farther north than c; otherwise p is farther south than c. The same holds analogously for long_delta.
if lat_delta < 0:
    pN = [latc + abs(lat_delta), longc]
else:
    pN = [latc - abs(lat_delta), longc]

if long_delta < 0:
    pE = [latc, longc + abs(long_delta)]
else:
    pE = [latc, longc - abs(long_delta)]
Now the points c, p, pN and pE form a "spherical" right triangle (I think I can safely assume it to be planar because the orthophoto covers at most about 1 hectare), so the Pythagorean theorem applies well enough for my purposes.
Next, I calculate the ground distances dN = Haversine(c,pN) and dE = Haversine(c, pE), which tell me how much in ground distance I must move in North-South and West-East axes in order to get from c to p.
Now I will apply a rotation matrix R(-Yaw) to vectors n = [0,1] and e = [1,0], which represent the upwards and right vectors in my pixel coordinate system. So I get nr = R(-Yaw)*n and er = R(-Yaw)*e where nr is a unit pixel vector pointing towards North in the image and er is similarly a unit pixel vector pointing towards East in the image.
Next, I calculate the ratios mN = dN/pix_size and mE = dE/pix_size (the factors also need to take into account the +- direction). Now I calculate the pixel location of p by:
pp = cp + mN*nr + mE*er,
where I can now easily check if the pixel values pp are within the bounds of the image Img.
Of course this method does not work in a general large area case and needs to be refined for this purpose.
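A rough Python sketch of the procedure described in the update; the helper names, test coordinates and image size are all made up, and it keeps the post's convention of n = [0,1] as "up" and e = [1,0] as "right" in the pixel frame:

import numpy as np

R_EARTH = 6371000.0  # mean earth radius in metres

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    a = (np.sin((p2 - p1)/2)**2
         + np.cos(p1)*np.cos(p2)*np.sin(np.radians(lon2 - lon1)/2)**2)
    return 2*R_EARTH*np.arcsin(np.sqrt(a))

def gps_to_pixel(lat, lon, lat_c, lon_c, cp, yaw_deg, pix_size_cm):
    """Pixel location of (lat, lon), given the image centre (lat_c, lon_c) at
    pixel cp, the yaw of the image and the ground resolution in cm/pixel."""
    # signed ground distances along the N-S and W-E axes (the +/- direction)
    dN = haversine(lat_c, lon_c, lat, lon_c) * np.sign(lat - lat_c)
    dE = haversine(lat_c, lon_c, lat_c, lon) * np.sign(lon - lon_c)
    # rotate the pixel-frame "up" and "right" unit vectors by -yaw
    t = np.radians(-yaw_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    n_r = R @ np.array([0.0, 1.0])    # unit pixel vector pointing towards north
    e_r = R @ np.array([1.0, 0.0])    # unit pixel vector pointing towards east
    mN = dN * 100.0 / pix_size_cm     # metres -> pixels
    mE = dE * 100.0 / pix_size_cm
    return np.asarray(cp, dtype=float) + mN*n_r + mE*e_r

pp = gps_to_pixel(60.17010, 24.94070, 60.17000, 24.94050,
                  cp=(512, 384), yaw_deg=15.0, pix_size_cm=2.5)
inside = (0 <= pp[0] < 1024) and (0 <= pp[1] < 768)   # hypothetical image size
print(pp, inside)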

Camera calibration: 3D to 2D points mapping

I am working on a problem related to camera calibration. In the image below, we consider a world coordinate system with the X-axis going leftward, the Y-axis rightward and the Z-axis upward. We select 15 points (x, y, z) distributed uniformly across the 3 planes. The distance between grid lines is 1 inch. We also obtain the MATLAB pixel coordinates (u, v) of the 15 points. The objective is to obtain the 3x4 camera matrix (M) using homogeneous linear least squares and then project the world points (x, y, z) to the image (u', v') using M. I have written code to do this, but the coordinates (u', v') I'm obtaining are very small in magnitude compared to the actual coordinates (u, v). The RMS error is too large and the projected points don't even map onto the image anywhere near the actual points. Is there any scaling that I need to do to convert them to MATLAB coordinates? I am also including my code, which isn't very well written since I am relatively new to MATLAB.
P = []; % 2nx12 matrix - here 30x12
for i = 1:15 % compute P
    world_row = world_coords(i,:); % 3d homogeneous coordinates (x,y,z,1)
    zeroelem = repelem(0,4);
    image_coord = image_coords(i,:);
    img_u = image_coord(1);
    prod = -img_u*world_row;
    row1 = [world_row,zeroelem,prod];
    zeroelem = repelem(0,3);
    img_v = image_coord(2);
    prod = -img_v*world_row;
    row2 = [0,world_row,zeroelem,prod];
    P = [P;row1;row2];
end
var1 = P'*P;
[V,D] = eig(var1'); % compute eigenvector corresponding to the least eigenvalue
m = V(:,1);         % unit vector of norm 1
M = reshape(m,3,4); % camera matrix of size 3x4
% get projected points
proj = M*world_coords';
U = proj(1,:);
V = proj(2,:);
W = proj(3,:);
for i = 1:15
    U(i) = U(i)/W(i);
    V(i) = V(i)/W(i);
end
final = [U;V]; % (u',v')
I am also including the image with the 15 points I have selected. Take P1(u,v) = (286,260) and P1(x,y,z) = (4,0,3). The (u',v') I obtained for this point have low values. Can anyone point out what I'm doing wrong?
It was a silly error on my part that was giving me the wrong camera matrix. I noted down the world coordinates of the point P wrongly ((7,0,1) instead of (1,0,1)). This led to a wrongly formed 30x12 matrix which we use to form the equation solved by homogeneous linear least squares. After correcting this mistake, I obtained a calibration matrix that projects the 3D points with a low RMS error.
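For reference, a minimal NumPy sketch of the same homogeneous linear least-squares (DLT) step on synthetic data; this uses the standard row layout [Xᵀ 0 −u·Xᵀ; 0 Xᵀ −v·Xᵀ] and an SVD null vector, and is not the poster's MATLAB code:

import numpy as np

rng = np.random.default_rng(0)
M_true = rng.normal(size=(3, 4))                                # ground-truth camera
Xw = np.hstack([rng.uniform(0, 5, (15, 3)), np.ones((15, 1))])  # homogeneous 3D points
uvw = Xw @ M_true.T
uv = uvw[:, :2] / uvw[:, 2:3]                                   # "observed" pixels

rows = []
for X, (u, v) in zip(Xw, uv):
    rows.append(np.hstack([X, np.zeros(4), -u*X]))
    rows.append(np.hstack([np.zeros(4), X, -v*X]))
P = np.array(rows)                                              # 2n x 12

# Null vector of P = right singular vector of the smallest singular value.
_, _, Vt = np.linalg.svd(P)
M = Vt[-1].reshape(3, 4)

proj = Xw @ M.T
proj = proj[:, :2] / proj[:, 2:3]
print(np.sqrt(np.mean((proj - uv)**2)))                         # RMS reprojection error ~ 0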

Compute curvature of a bent pipe using image processing (Hough transform parabola detection)

I'm trying to design a way to detect this pipe's curvature. I tried applying a Hough transform and got detected lines, but they don't lie along the surface of the pipe, so smoothing them out to fit a Bézier curve is not working. Please suggest a good way to start for an image like this.
The image obtained by the Hough transform line detection is as follows:
I'm using standard Matlab code for probabilistic Hough transform line detection that generates line segments surrounding the structure. Essentially, the shape of the pipe resembles a parabola, but for Hough parabola detection I need to provide the eccentricity of the points prior to detection. Please suggest a good way of finding discrete points along the curvature that can be fitted to a parabola. I have tagged opencv and ITK, so if there is a function that can be applied to this particular picture, please suggest it and I will try it out to see the results.
img = imread('test2.jpg');
rawimg = rgb2gray(img);
% bwtu is the binary edge map of rawimg (e.g. obtained with edge(rawimg,'canny'))
[accum, axis_rho, axis_theta, lineprm, lineseg] = Hough_Grd(bwtu, 8, 0.01);
figure(1); imagesc(axis_theta*(180/pi), axis_rho, accum); axis xy;
xlabel('Theta (degree)'); ylabel('Rho (pixels)');
title('Accumulation Array from Hough Transform');
figure(2); imagesc(bwtu); colormap('gray'); axis image;
DrawLines_2Ends(lineseg);
title('Raw Image with Line Segments Detected');
The edge map of the image is as follows, and the result of applying the Hough transform on the edge map is also not good. I was thinking of a solution based on general parametric shape detection: the curve can be expressed as a member of a family of parabolas, so we can do a curve fit to estimate the coefficients as it bends and thereby analyze its curvature. I need to design a real-time procedure, so please suggest anything in this direction.
I suggest the following approach:
First stage: generate a segmentation of the pipe.
Perform thresholding on the image.
Find connected components in the thresholded image.
Search for the connected component which represents the pipe.
The connected component which represents the pipe should have an edge map which is divided into top and bottom edges (see attached image).
The top and bottom edges should have similar sizes, and they should keep a relatively constant distance from one another. In other words, the variance of their per-pixel distances should be low.
Second stage: extract the curve
At this stage, you should extract the points of the curve for performing the Bézier fitting.
You can perform this calculation either on the top edge or on the bottom edge.
Another option is to do it on the skeleton of the pipe segmentation.
Results
The pipe segmentation. Top and bottom edges are marked in blue and red, respectively.
Code
I = mat2gray(imread('ILwH7.jpg'));
im = rgb2gray(I);
% constant values to be used later on
BW_THRESHOLD = 0.64;
MIN_CC_SIZE = 50;
VAR_THRESHOLD = 2;
SIMILAR_SIZE_THRESHOLD = 0.85;
% stage 1 - thresholding & noise cleaning
bwIm = im > BW_THRESHOLD;
bwIm = imfill(bwIm,'holes');
bwIm = imopen(bwIm,strel('disk',1));
CC = bwconncomp(bwIm);
% iterates over the CC list, and searches for the CC which represents the pipe
for ii = 1:length(CC.PixelIdxList)
    % ignore small CC
    if(length(CC.PixelIdxList{ii}) < MIN_CC_SIZE)
        continue;
    end
    % extracts CC edges
    ccMask = zeros(size(bwIm));
    ccMask(CC.PixelIdxList{ii}) = 1;
    ccMaskEdges = edge(ccMask);
    % finds connected components in the edge mat (there should be two).
    % these are the top and bottom parts of the pipe.
    CC2 = bwconncomp(ccMaskEdges);
    if length(CC2.PixelIdxList) ~= 2
        continue;
    end
    % tests that the top and bottom edges have similar sizes
    s1 = length(CC2.PixelIdxList{1});
    s2 = length(CC2.PixelIdxList{2});
    if(min(s1,s2)/max(s1,s2) < SIMILAR_SIZE_THRESHOLD)
        continue;
    end
    % calculate the masks of these two connected components
    topEdgeMask = false(size(ccMask));
    topEdgeMask(CC2.PixelIdxList{1}) = true;
    bottomEdgeMask = false(size(ccMask));
    bottomEdgeMask(CC2.PixelIdxList{2}) = true;
    % tests that the variance of the distances between the points is low
    topEdgeDists = bwdist(topEdgeMask);
    bottomEdgeDists = bwdist(bottomEdgeMask);
    var1 = std(topEdgeDists(bottomEdgeMask));
    var2 = std(bottomEdgeDists(topEdgeMask));
    % if the variances are low - we have found the CC of the pipe. break!
    if(var1 < VAR_THRESHOLD && var2 < VAR_THRESHOLD)
        pipeMask = ccMask;
        break;
    end
end
% performs median filtering on the top and bottom boundaries.
MEDIAN_SIZE = 5;
[topCurveY, topCurveX] = find(topEdgeMask);
topCurveX = medfilt1(topCurveX);
topCurveY = medfilt1(topCurveY);
[bottomCurveY, bottomCurveX] = find(bottomEdgeMask);
bottomCurveX = medfilt1(bottomCurveX);
bottomCurveY = medfilt1(bottomCurveY);
% display results
imshow(pipeMask); hold on;
plot(topCurveX,topCurveY,'.-');
plot(bottomCurveX,bottomCurveY,'.-');
Comments
In this specific example, acquiring the pipe segmentation by thresholding was relatively easy. In some scenes it may be more complex; in those cases, you may want to use a region-growing algorithm to generate the pipe segmentation.
Detecting the connected component which represents the pipe can be done using some more heuristics. For example, the local curvature of its boundaries should be low.
You can find the connected components (CCs) of your inverted edge-map image. Then you can filter those components, for example based on their pixel count, using region properties. Here are the connected components I obtained using the Octave code given below.
Now you can fit a model to each of these CCs using something like nlinfit or any suitable method.
im = imread('uFBtU.png');
gr = rgb2gray(uint8(im));
er = imerode(gr, ones(3)) < .5;
[lbl, n] = bwlabel(er, 8);
imshow(label2rgb(lbl))
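Once you have curve points from any of the approaches above (top/bottom edge, skeleton, or a filtered connected component), a parabola fit and its curvature are straightforward. A small illustrative Python sketch on synthetic points (not the pipe data):

import numpy as np

# synthetic stand-in for extracted edge/skeleton points along the pipe
x = np.linspace(0, 200, 120)
y = 0.002*x**2 - 0.3*x + 40 + np.random.default_rng(1).normal(0, 0.5, x.size)

a, b, c = np.polyfit(x, y, 2)              # least-squares parabola y = a*x^2 + b*x + c
dy = 2*a*x + b                             # first derivative along the curve
kappa = np.abs(2*a) / (1 + dy**2)**1.5     # curvature kappa = |y''| / (1 + y'^2)^(3/2)
print(a, b, c, kappa.max())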

How to find the order of discrete point-set efficiently?

I have a series of discrete points on a plane; however, their order is scrambled. Here is an instance:
To connect them with a smooth curve, I wrote findSmoothBoundary() to obtain the smooth boundary.
Code
function findSmoothBoundary(boundaryPointSet)
    % initialize the current point
    currentP = boundaryPointSet(1,:);
    % create a space smoothPointsSet to store the points
    smoothPointsSet = NaN*ones(length(boundaryPointSet),2);
    % delete the current point from the boundaryPointSet
    boundaryPointSet(1,:) = [];
    ptsNum = 1; % record the number of points in smoothPointsSet
    smoothPointsSet(ptsNum,:) = currentP;
    while ~isempty(boundaryPointSet)
        % utilize the built-in knnsearch() to
        % find the nearest point to the current point
        nearestPidx = knnsearch(boundaryPointSet,currentP);
        currentP = boundaryPointSet(nearestPidx,:);
        ptsNum = ptsNum + 1;
        smoothPointsSet(ptsNum,:) = currentP;
        % delete the nearest point from boundaryPointSet
        boundaryPointSet(nearestPidx,:) = [];
    end
    % visualize the smooth boundary
    plot(smoothPointsSet(:,1),smoothPointsSet(:,2))
    axis equal
end
Although findSmoothBoundary() finds the smooth boundary correctly, its efficiency is quite low (for the data, please see here).
So I would like to know:
How can I find the order of the discrete points efficiently?
Data
theta = linspace(0,2*pi,1000)';
boundaryPointSet= [2*sin(theta),cos(theta)];
tic;
findSmoothBoundary(boundaryPointSet)
toc;
%Elapsed time is 4.570719 seconds.
This answer is not perfect because I'll have to make a few hypotheses in order for it to work. However, for a vast majority of cases, it should work as intended. Moreover, from the link you gave in the comments, I think these hypotheses are at least weak, if not satisfied by definition:
1. The points form a single connected region
2. The center of mass of your points lies in the convex hull of those points
If these hypotheses are respected, you can do the following (full code available at the end):
Step 1 : Calculate the center of mass of your points
Means=mean(boundaryPointSet);
Step 2 : Change variables to set the origin to the center of mass
boundaryPointSet(:,1)=boundaryPointSet(:,1)-Means(1);
boundaryPointSet(:,2)=boundaryPointSet(:,2)-Means(2);
Step 3 : Convert coordinates to polar
[Angles,Radius]=cart2pol(boundaryPointSet(:,1),boundaryPointSet(:,2));
Step 4 : Sort the angles and use this sorting to sort the radii
[newAngles,ids]=sort(Angles);
newRadius=Radius(ids);
Step 5 : Go back to cartesian coordinates and re-add the coordinates of the center of mass:
[X,Y]=pol2cart(newAngles,newRadius);
X=X+Means(1);
Y=Y+Means(2);
Full Code
%%% Find smooth boundary
fid=fopen('SmoothBoundary.txt');
A=textscan(fid,'%f %f','delimiter',',');
boundaryPointSet=cell2mat(A);
boundaryPointSet(any(isnan(boundaryPointSet),2),:)=[];
idx=randperm(size(boundaryPointSet,1));
boundaryPointSet=boundaryPointSet(idx,:);
tic
plot(boundaryPointSet(:,1),boundaryPointSet(:,2))
%% Find mean value of all parameters
Means=mean(boundaryPointSet);
%% Center values around Mean point
boundaryPointSet(:,1)=boundaryPointSet(:,1)-Means(1);
boundaryPointSet(:,2)=boundaryPointSet(:,2)-Means(2);
%% Get polar coordinates of your points
[Angles,Radius]=cart2pol(boundaryPointSet(:,1),boundaryPointSet(:,2));
[newAngles,ids]=sort(Angles);
newRadius=Radius(ids);
[X,Y]=pol2cart(newAngles,newRadius);
X=X+Means(1);
Y=Y+Means(2);
toc
figure
plot(X,Y);
Note: as your values are already sorted in your input file, I had to scramble them a bit by permuting them.
Outputs :
Boundary
Elapsed time is 0.131808 seconds.
Messed Input :
Output :
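For completeness, a rough NumPy translation of the angle-sort idea above, under the same two assumptions (single connected region, centre of mass inside the convex hull); the test data mirrors the ellipse from the question:

import numpy as np

# scrambled ellipse points, as in the question's test data
theta = np.linspace(0, 2*np.pi, 1000)
pts = np.column_stack((2*np.sin(theta), np.cos(theta)))
pts = pts[np.random.default_rng(0).permutation(len(pts))]

center = pts.mean(axis=0)                       # step 1: centre of mass
d = pts - center                                # step 2: shift origin to it
angles = np.arctan2(d[:, 1], d[:, 0])           # step 3: polar angle of each point
ordered = pts[np.argsort(angles)]               # steps 4-5: sort by angle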

Recognizing edges based on points and normals

I have a bit of a problem categorizing points based on relative normals.
What I would like to do is use the information I got below to fit a simplified polygon to the points, with a bias towards 90 degree angles to an extent.
I have the rough (although not very accurate) normal lines for each point, but I'm not sure how to separate the data based on the closeness of the points and the closeness of the normals. I plan to do a linear regression after chunking the points for each face, as the normal lines sometimes do not fit well with the actual faces (although they are close to each other for each face).
Example:
alt text http://a.imageshack.us/img842/8439/ptnormals.png
Ideally, I would like to be able to fit a rectangle around this data. However, the polygon does not need to be convex, nor does it have to be aligned with the axis.
Any hints as to how to achieve something like this would be awesome.
Thanks in advance
I am not sure if this is what you are looking for, but here's my attempt at solving the problem as I understood it:
I am using the angles of the normal vectors to find points belonging to each side of the rectangle (left, right, up, down), then simply fit a line to each.
%# create random data (replace those with your actual data)
num = randi([10 20]);
pT = zeros(num,2);
pT(:,1) = rand(num,1);
pT(:,2) = ones(num,1) + 0.01*randn(num,1);
aT = 90 + 10*randn(num,1);
num = randi([10 20]);
pB = zeros(num,2);
pB(:,1) = rand(num,1);
pB(:,2) = zeros(num,1) + 0.01*randn(num,1);
aB = 270 + 10*randn(num,1);
num = randi([10 20]);
pR = zeros(num,2);
pR(:,1) = ones(num,1) + 0.01*randn(num,1);
pR(:,2) = rand(num,1);
aR = 0 + 10*randn(num,1);
num = randi([10 20]);
pL = zeros(num,2);
pL(:,1) = zeros(num,1) + 0.01*randn(num,1);
pL(:,2) = rand(num,1);
aL = 180 + 10*randn(num,1);
pts = [pT;pR;pB;pL]; %# x/y coords
angle = mod([aT;aR;aB;aL],360); %# angle in degrees [0,360]
%# plot points and normals
plot(pts(:,1), pts(:,2), 'o'), hold on
theta = angle * pi / 180;
quiver(pts(:,1), pts(:,2), cos(theta), sin(theta), 0.4, 'Color','g')
hold off
%# divide points based on angle
[~,bin] = histc(angle,[0 45 135 225 315 360]);
bin(bin==5) = 1; %# combine last and first bin
%# fit line to each segment
hold on
for i=1:4
    %# indices of points in this segment
    idx = ( bin == i );
    %# x/y or y/x
    if i==2||i==4, xx=1; yy=2; else xx=2; yy=1; end
    %# fit line
    coeff = polyfit(pts(idx,xx), pts(idx,yy), 1);
    fit(:,1) = 0:0.05:1;
    fit(:,2) = polyval(coeff, fit(:,1));
    %# plot fitted line
    plot(fit(:,xx), fit(:,yy), 'Color','r', 'LineWidth',2)
end
hold off
I'd try the following
Cluster the points based on proximity and similar angle. I'd use single-linkage hierarchical clustering (LINKAGE in Matlab), since you don't know a priori how many edges there will be. Single linkage favors linear structures, which is exactly what you're looking for. As the distance criterion between two points you can use the Euclidean distance between the point coordinates multiplied by a function of the angle difference that increases very steeply as soon as the angle differs by more than, say, 20 or 30 degrees (a rough sketch of this clustering step follows below).
Do (robust) linear regression on the data. Using the normals may or may not help; my guess is that they won't help much. For simplicity, you may want to disregard the normals initially.
Find the intersections between the lines.
If you have to, you can always try and improve the fit, for example by constraining opposite lines to be parallel.
If that fails, you could try and implement the approach in THIS PAPER, which allows fitting multiple straight lines at once.
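A rough Python sketch of the clustering step; the data below is synthetic (four noisy sides of a unit square with noisy normal angles), and the 20-degree scale in the distance penalty is an arbitrary choice:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# synthetic points on the four sides of a unit square, with noisy normal angles
rng = np.random.default_rng(0)
t = rng.uniform(0, 1, 40)
pts = np.vstack([np.c_[t[:10],  np.zeros(10)],   # bottom
                 np.c_[t[10:20], np.ones(10)],   # top
                 np.c_[np.zeros(10), t[20:30]],  # left
                 np.c_[np.ones(10),  t[30:]]])   # right
ang = np.concatenate([np.full(10, 270), np.full(10, 90),
                      np.full(10, 180), np.full(10, 0)]) + rng.normal(0, 5, 40)

def mixed_dist(a, b):
    """Euclidean distance scaled by a factor that grows steeply past ~20 degrees."""
    d_xy = np.hypot(a[0] - b[0], a[1] - b[1])
    d_ang = abs((a[2] - b[2] + 180) % 360 - 180)      # angle difference in [0, 180]
    return d_xy * (1 + (d_ang/20.0)**4)

feat = np.column_stack((pts, ang))
Z = linkage(pdist(feat, metric=mixed_dist), method='single')   # single linkage
labels = fcluster(Z, t=4, criterion='maxclust')                # expect 4 edge clusters

# one (total least squares) line per cluster: centroid + dominant direction;
# corners then follow from intersecting the lines of adjacent clusters
for k in np.unique(labels):
    p = pts[labels == k]
    centroid = p.mean(axis=0)
    _, _, Vt = np.linalg.svd(p - centroid)
    direction = Vt[0]
    print(k, centroid, direction)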
You could get the mean value for the X and Y coordinates for each side and then just make lines based on that.
