H3 polyfill is skipping the areas near the boundary I am trying to polyfill. (Python) - h3

Is there a way to polyfill edge to edge, or at least cover as much of the polygon as possible (at resolution 8), without increasing the resolution?
The black boundary is the polygon boundary I am trying to polyfill completely.

Why this happens
This happens because polyfill only returns cells whose center falls inside the polygon.
What to do instead
In your case, you could run polyfill at a finer resolution and then use h3_to_parent to convert the result back to your desired resolution.
Keep in mind that this still isn't guaranteed to cover the whole geometry, but in practice a sufficiently fine polyfill resolution usually does the trick (and the fact that cells with only a tiny intersection might not be generated is often even useful).
Example
Here is an example using Geopandas and H3-Pandas, but of course, the logic works with any implementation of the H3 API.
import geopandas as gpd
import h3pandas
# Choose a random country boundary (Tanzania)
gdf = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres')).iloc[1:2]
Simple polyfill
gdf.h3.polyfill_resample(2)
"Finer" polyfill
gdf.h3.polyfill_resample(4).h3.h3_to_parent_aggregate(2)
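If you are not using H3-Pandas, a rough sketch of the same idea with the plain h3 Python bindings (v3 API; in v4 the calls are named polygon_to_cells and cell_to_parent) could look like the following. The polygon and the resolutions 10 and 8 are just placeholder values:
import h3

# Hypothetical GeoJSON polygon (lng/lat pairs when geo_json_conformant=True)
polygon = {
    "type": "Polygon",
    "coordinates": [[
        [-122.45, 37.75], [-122.35, 37.75],
        [-122.35, 37.82], [-122.45, 37.82],
        [-122.45, 37.75],
    ]],
}

target_res = 8   # the resolution you actually want
fine_res = 10    # polyfill at a finer resolution so boundary cells are not skipped

fine_cells = h3.polyfill(polygon, fine_res, geo_json_conformant=True)
coarse_cells = {h3.h3_to_parent(c, target_res) for c in fine_cells}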

Related

Creating a fence diagram in Mayavi or Matplotlib

Working with Matplotlib, I have produced some resistivity cross-sections of the soil, obtaining pictures like this:
Now I would like to display all those sections in 3D so as to visualise better the spatial distribution of resistivity in the field (i.e. a so-called fence diagram). I would also like to plot the 2D map of the site where those measurements were carried out at the base of my plot (say on the XY plane).
As far as I have seen this is not feasible (or at least not convenient) with Matplotlib in 3D hence I decided to switch to Mayavi.
My questions are:
Is it feasible to import georeferenced rasters and then properly place them on the correct (vertical) planes (not necessarily parallel to the Cartesian ones) with Mayavi? Does imshow() serve this purpose?
Is it better to recreate the contours in Mayavi at the proper locations? If that is the case, I did not find a function to create contours from unstructured data (the input images were created with tricontour/tricontourf in Matplotlib). I do not think interpolating over a structured grid in SciPy would do, given the non-convex domain.
Ok, answering my own question:
from mayavi import mlab
# x, y, z, triangles, values come from the (masked) Matplotlib triangulation
mesh = mlab.pipeline.triangular_mesh_source(x, y, z, triangles, scalars=values)
surf = mlab.pipeline.surface(mesh)
seems to do the job.
To be consistent with the previous work, the triangulation, duly masked, can be directly imported from Matplotlib.
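For completeness, here is an untested sketch of that idea; the scattered coordinates and resistivity values are made up, and placing the section in the vertical plane y = 0 is just one choice (rotate or translate the vertices for other fence panels):
import numpy as np
from matplotlib.tri import Triangulation
from mayavi import mlab

# Hypothetical scattered measurements for one cross-section
s = np.random.rand(200) * 50.0        # distance along the section
d = -np.random.rand(200) * 10.0       # depth (negative downwards)
rho = np.hypot(s - 25.0, d + 5.0)     # fake resistivity values

tri = Triangulation(s, d)
# tri.set_mask(...)  # mask triangles outside the non-convex domain, as in Matplotlib

# Place the section in the vertical plane y = 0
mesh = mlab.pipeline.triangular_mesh_source(
    s, np.zeros_like(s), d, tri.get_masked_triangles(), scalars=rho)
surf = mlab.pipeline.surface(mesh)
mlab.show()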

geopandas rasterize shapefile

I am looking for the very simplest way to rasterise a shapefile in geopandas - the equivalent of arcpy's PolygonToRaster_conversion(), which does things in one line.
I have found some relatively involved methods, e.g.
https://snorfalorpagus.net/blog/2014/11/09/masking-rasterio-layers-with-vector-features/
Is it really this complicated? Or is there a one-line option like arcpy's PolygonToRaster_conversion()?
I'm looking for the simplest starting point to get the idea.
I've been exploring rasterio to do this, but perhaps there are other ways.
I'm only just starting to use GeoPandas and would appreciate any pointers.
Are you trying to rasterize a set of polygons with unique values in one step? If so, you want to rasterize using that unique value for each polygon, but beware that the last polygon rasterized to a given pixel will "claim" it (i.e., multiple polygons may touch a pixel, but the last one in your list of features will be the value rasterized there).
Or do you want to rasterize each polygon independently (or all polygons at the same time, as if they were a single polygon), so that you can extract out statistics from the raster? Mask may work for this, in a loop over each feature.
The closest you are likely to get to a one-line operation is using rasterio's rio mask or rio rasterize operation. The reason that the example you link to is more involved is that you need to do a few extra things to extract a subset of your original raster. There are now a few extra methods in rasterio that make that a bit easier (docs).
From geopandas, your geometry is in a GeoSeries. I haven't tested this directly, but you may need to call the __geo_interface__ of the series to get back GeoJSON-like shapes that rasterio expects as input.
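If you want to stay in Python rather than use the CLI, a short sketch with rasterio.features.rasterize might look like the following; the file name polygons.shp, the 500 x 500 output size, and burning the polygon index as the pixel value are all assumptions to adapt:
import geopandas as gpd
from rasterio import features
from rasterio.transform import from_bounds

gdf = gpd.read_file("polygons.shp")          # hypothetical input shapefile

# Output grid covering the layer's extent at a chosen size
width, height = 500, 500
transform = from_bounds(*gdf.total_bounds, width, height)

# Pair each geometry with the value to burn (here a unique id per polygon);
# remember the last polygon written to a pixel "claims" it
shapes = ((geom, i + 1) for i, geom in enumerate(gdf.geometry))

raster = features.rasterize(
    shapes,
    out_shape=(height, width),
    transform=transform,
    fill=0,          # background value
    dtype="int32",
)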

How to increase the coordinate resolution of a d3-geo chart

I have a GeoJSON file with small details and features that I want to render using D3. Unfortunately, important details are lost because D3 removes polygon coordinate pairs that are closely spaced.
I've set up a small example to show this. Both links use the exact same GeoJSON data, rendered with D3-geo and with Mapbox through GitHub.
Specifically, notice the two areas marked by the red circles.
https://bl.ocks.org/alvra/eebb06be793bc06ff3ae01e6945298b6
https://gist.github.com/alvra/eebb06be793bc06ff3ae01e6945298b6
The top one marks a part of a polygon that is rounded using many closely spaced coordinate pairs, but D3 removes most points and just draws a rough square end.
The lower red circle marks a tiny triangle that is removed altogether. The adjacent polygons should touch exactly, but are also affected by D3's loss of precision.
I haven't found any documentation about D3's coordinate precision or a (configurable) feature size limit.
I've tried decreasing D3-geo's EPSILON and related EPSILON2 values, and that fixes the problem (for me), although I'm sure even smaller features would still be affected.
I assume this is related to the fact that D3 uses proper geodesics for polygon segments, while the other mapping libraries just draw straight lines (in the output coordinate space); I had hoped that this process could only introduce new points.
I haven't been able to find other users experiencing similar problems with small features, and I'm surprised this hasn't come up before.
Does anyone have an idea about the proper way to deal with this?
By tweaking epsilon, I've narrowed the problem down to this use of pointEqual(). This indicates that clipCircle considers closely spaced coordinates equal and removes them.
Indeed, if I disable circular clipping with projection.clipAngle(null), the problem disappears.

Matching a curve pattern to the edges of an image

I have a target image to be searched for a curve along its edges and a template image that contains the curve. What I need to achieve is to find the best match of the curve in the template image within the target image, and based on the score, to find out whether there is a match or not. That also includes rotation and resizing of the curve. The target image can be the output of a Canny Edge detector if that makes things easier.
I am considering using OpenCV (from Python or Processing/Java, or, if those have limited access to the required functions, from C) to keep things practical and efficient; however, I could not find out whether any functions (or combination of them) in OpenCV are usable for this job. I have been reading through the OpenCV documentation and thought at first that Contours could do it, but all the examples show closed shapes, whereas I need to match an open curve to a part of an edge.
So is there a way to do this either by using OpenCV or with any known code or algorithm that you would suggest?
Here are some images to illustrate the problem:
My first thought was the Generalized Hough Transform. However, I don't know of any good implementation of it.
I would try SIFT or SURF first on the Canny edge image. They are usually used to find 2D areas, not 1D contours, but if you take the minimum bounding box around your contour and use that as the search pattern, it should work.
OpenCV has an implementation for that:
Features2D + Homography to find a known object
A problem may be getting a good edge image; those black shapes in the background could be distracting.
Also see this Stack Overflow answer:
Image Processing: Algorithm Improvement for 'Coca-Cola Can' Recognition
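As a starting point, an untested sketch of the SIFT-plus-homography idea on Canny edge images could look like this; the file names, the Canny thresholds, and the 0.75 ratio test are assumptions to tune:
import cv2
import numpy as np

# Hypothetical file names: template.png holds the curve, target.png the scene
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)
target = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)

# Work on edge images, as suggested above
template_edges = cv2.Canny(template, 50, 150)
target_edges = cv2.Canny(target, 50, 150)

# Detect and describe keypoints (SIFT ships with OpenCV >= 4.4)
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(template_edges, None)
kp2, des2 = sift.detectAndCompute(target_edges, None)

# Ratio-test matching
matcher = cv2.BFMatcher()
good = []
for pair in matcher.knnMatch(des1, des2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

# Estimate a homography; the RANSAC inlier count gives a rough match score
if len(good) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print("match score (inliers):", int(inliers.sum()))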

Find tunnel 'center line'?

I have some map files consisting of 'polylines' (each line is just a list of vertices) representing tunnels, and I want to try and find the tunnel 'center line' (shown, roughly, in red below).
I've had some success in the past using Delaunay triangulation but I'd like to avoid that method as it does not (in general) allow for easy/frequent modification of my map data.
Any ideas on how I might be able to do this?
An "algorithm" that works well with localized data changes.
The critic's view
The Good
The nice part is that it uses a mixture of image processing and graph operations available in most libraries, may be parallelized easily, is reasonably fast, may be tuned to use a relatively small memory footprint, and doesn't have to be recalculated outside the modified area if you store the intermediate results.
The Bad
I wrote "algorithm", in quotes, just because I developed it and surely is not robust enough to cope with pathological cases. If your graph has a lot of cycles you may end up with some phantom lines. More on this and examples later.
And The Ugly
The ugly part is that you need to be able to flood fill the map, which is not always possible. I posted a comment a few days ago asking if your graphs can be flood filled, but didn't receive an answer. So I decided to post it anyway.
The Sketch
The idea is:
Use image processing to get a fine line of pixels representing the center path
Partition the image into chunks commensurate with the tunnel's thinnest passages
In each partition, place a point at the "center of mass" of the contained pixels
Use those pixels to represent the Vertices of a Graph
Add Edges to the Graph based on a "near neighbour" policy
Remove spurious small cycles in the induced Graph
End- The remaining Edges represent your desired path
The parallelization opportunity arises from the fact that the partitions may be computed in standalone processes, and the resulting graph may be partitioned to find the small cycles that need to be removed. These factors also make it possible to reduce the memory needed by serializing the work instead of doing the calculations in parallel, but I didn't go through this.
The Plot
I won't provide pseudocode, as the difficult part is precisely the one not covered by your libraries. Instead of pseudocode, I'll post the images resulting from the successive steps.
I wrote the program in Mathematica, and I can post it if it is of some service to you.
A- Start with a nice flood filled tunnel image
B- Apply a Distance Transformation
The Distance Transformation gives the distance transform of the image, where the value of each pixel is replaced by its distance to the nearest background pixel.
You can see that our desired path follows the local maxima within the tunnel.
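If you prefer Python over Mathematica, this step can be sketched with SciPy's Euclidean distance transform; the toy rectangular mask below is only a stand-in for your flood-filled tunnel image:
import numpy as np
from scipy import ndimage as ndi

# Placeholder for the flood-filled tunnel: True inside the tunnel, False outside
tunnel_mask = np.zeros((200, 400), dtype=bool)
tunnel_mask[80:120, 20:380] = True

# Each inside pixel gets its distance to the nearest background pixel
dist = ndi.distance_transform_edt(tunnel_mask)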
C- Convolve the image with an appropriate kernel
The selected kernel is a Laplacian-of-Gaussian kernel of pixel radius 2. It has the magic property of enhancing the gray level edges, as you can see below.
D- Cutoff gray levels and Binarize the image
To get a nice view of the center line!
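Continuing that sketch (dist is the distance map from step B), SciPy's gaussian_laplace can play the role of the Laplacian-of-Gaussian convolution; the sigma and the threshold are assumptions you would tune:
from scipy import ndimage as ndi

# LoG response of the distance map: the ridge of local maxima becomes strongly negative
log = ndi.gaussian_laplace(dist, sigma=2)

# Cut off and binarize: keep pixels with a sufficiently strong (negative) response
center_line = log < 0.5 * log.min()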
Comment
Perhaps that is enough for you, as you may know how to turn a thin line into an approximate sequence of piecewise segments. As that is not the case for me, I continued down this path to get the desired segments.
E- Image Partition
Here is where some advantages of the algorithm show up: you may start using parallel processing or decide to process one segment at a time. You may also compare the resulting segments with those of the previous run and reuse the previous results.
F- Center of Mass detection
All the white points in each sub-image are replaced by only one point at the center of mass
X_CM = ( Σ_{i ∈ Points} X_i ) / NumPoints
Y_CM = ( Σ_{i ∈ Points} Y_i ) / NumPoints
The white pixels are difficult to see, but there they are.
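Continuing the Python sketch (center_line is the binary image from step D), the partition and center-of-mass steps might look like this; the 16-pixel tile size is an assumed value, to be kept commensurate with the thinnest passages:
import numpy as np

tile = 16                      # sub-image size, an assumed value
points = []
h, w = center_line.shape
for r in range(0, h, tile):
    for c in range(0, w, tile):
        ys, xs = np.nonzero(center_line[r:r + tile, c:c + tile])
        if xs.size:            # any white pixels in this sub-image?
            points.append((c + xs.mean(), r + ys.mean()))   # (X_CM, Y_CM)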
G- Graph setup from Vertices
Form a Graph using the selected points as Vertices. Still no Edges.
H- Select Candidate Edges
Using the Euclidean Distance between points, select candidate edges. A cutoff is used to select an appropriate set of Edges. Here we are using 1.5 times the sub-image size.
As you can see, the resulting Graph has a few small cycles that we are going to remove in the next step.
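A possible rendering of steps G and H with networkx, continuing from the points list in the previous sketch; the cutoff of 1.5 times the sub-image size mirrors the choice described here:
from itertools import combinations
from math import hypot
import networkx as nx

G = nx.Graph()
G.add_nodes_from(range(len(points)))

cutoff = 1.5 * tile                     # 1.5 times the sub-image size
for i, j in combinations(range(len(points)), 2):
    (x1, y1), (x2, y2) = points[i], points[j]
    if hypot(x2 - x1, y2 - y1) <= cutoff:
        G.add_edge(i, j)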
I- Remove Small Cycles
Using a cycle-detection routine, we remove the small cycles up to a certain length. The cutoff length depends on a few parameters, and you should determine it empirically for your family of graphs.
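And a deliberately simplistic sketch of the cycle removal with networkx; removing an edge invalidates the cycle basis, so a real implementation would recompute it, and the length cutoff of 4 is only a starting guess:
import networkx as nx

max_cycle_len = 4                       # empirical cutoff, depends on your graphs
for cycle in nx.cycle_basis(G):
    if len(cycle) <= max_cycle_len:
        u, v = cycle[0], cycle[1]       # drop one edge of the short cycle
        if G.has_edge(u, v):
            G.remove_edge(u, v)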
J- That's it!
You can see that the resulting center line is shifted a little bit upwards. The reason is that I'm superimposing images of different type in Mathematica ... and I gave up trying to convince the program to do what I want :)
A Few Shots
As I did the testing, I collected a few images. They are probably the most un-tunnelish things in the world, but my Tunnels-101 went astray.
Anyway, here they are. Remember that I have a displacement of a few pixels upwards ...
HTH !
Update
Just in case you have access to Mathematica 8 (I got it today) there is a new function Thinning. Just look:
This is a pretty classic skeletonization problem; there are lots of algorithms available. Some algorithms work in principle on outline contours, but since almost everyone uses them on images, I'm not sure how available such things will be. Anyway, if you can just plot and fill the sewer outlines and then use a skeletonization algorithm, you could get something close to the midline (within pixel resolution).
Then you could walk along those lines and do a binary search with circles until you hit at least two separate line segments (three if you're at a branch point). The midpoint of the two spots you first hit, or the center of a circle touching the three points you first hit, is a good estimate of the center.
Well, in Python, using the skimage package, it is an easy task, as follows.
import pylab as pl
from skimage import morphology as mp

tun = 1 - pl.imread('tunnel.png')[..., 0]  # your tunnel image
skl = mp.medial_axis(tun)                  # skeleton

pl.subplot(121)
pl.imshow(tun, cmap=pl.cm.gray)
pl.subplot(122)
pl.imshow(skl, cmap=pl.cm.gray)
pl.show()

Resources