Use the H3 cell index to create a bounding box and perform a point-in-bbox operation

With latitude and longitude, there is the possibility to create a bounding box based on xmax/ymax and xmin/ymin.
Given a pair of coordinates, I can perform a range search to check whether these coordinates are within the bounding box. Something like
xmax >= longitude && xmin <= longitude && ymax >= latitude && ymin <= latitude
If all of this is true, I know my point falls within the bounding box.
I wonder if there is a similar possibility using the H3 cell index.
If I define the xmax/ymax and xmin/ymin with the index of the corresponding cell:
topLeftCorner: 8b2d55c256acfff
bottomRightCorner: 8b2d024758b1fff
Could I then use the way the cell index is constructed to perform a similar range search, like with real coordinates?
Something like (pseudo code):
point = 8b2d11c1599bfff
if(point[0:4] === topLeftCorner[0:4] && ....

Answered separately here: https://github.com/uber/h3/issues/722
The order of the indexes does not support this directly. You might be able to do this using local IJ coordinates - see cellToLocalIJ. Note that there are areas of the world (around pentagons) where this may not work well, but in local areas this should be possible.
In general, though, I think the simpler option when you want to check for inclusion in a large region is to have a reverse index - e.g. a table, map, or set of the indexes in the region. Determining if a point is in the area is then just a set inclusion check. It might also be simpler or more efficient to actually check a lat/lng bounding box (based on the cell center lat/lng) and then have a second, more expensive check to determine whether the index is in the region.
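A minimal sketch of the reverse-index approach, assuming the h3-js v4 bindings (polygonToCells and latLngToCell); the region polygon below is a made-up example, and resolution 11 matches the "8b…" cells in the question:
// Sketch of the reverse-index approach, assuming the h3-js v4 API.
const h3 = require('h3-js');
// Hypothetical region of interest, as an array of [lat, lng] vertices.
const regionPolygon = [
  [37.813, -122.408],
  [37.783, -122.408],
  [37.783, -122.358],
  [37.813, -122.358],
];
const resolution = 11; // "8b..." indexes are resolution 11
// Fill the region once and keep the cells in a set (the reverse index).
const regionCells = new Set(h3.polygonToCells(regionPolygon, resolution));
// Point-in-region then becomes a set-membership check.
function pointInRegion(lat, lng) {
  return regionCells.has(h3.latLngToCell(lat, lng, resolution));
}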

Related

Matching path in terrain

I need to match a path recorded by lidar (x/y/height) onto a map tile (a pixel height field).
I can assume the problem is 2.5D (i.e. a unique height for each point, no caverns) and that the region is small enough that the grid is uniform (no need to consider curvature). Naturally the track data is noisy, and I don't have any known locations in advance.
Rather than do a full 3D point-based Iterative Closest Point, are there any simple algorithms for purely surface-based path matching I should take a look at?
Specifically, it seems to be an image-processing problem (x, y, height = intensity), so perhaps some sort of snake-matching algorithm?
Instead of brute force (calculating the error for the full path from each starting point), you can expand only the best candidates and potentially save a lot of work.
1. Normalize:
   Subtract (pathMinX, pathMinY) from all points in the path.
   Subtract (gridMinX, gridMinY) from all points in the grid.
2. Find pathMaxX, pathMaxY, gridMaxX, gridMaxY.
3. Compute deltaMaxX = gridMaxX - pathMaxX and deltaMaxY = gridMaxY - pathMaxY.
4. Create an array or list with (deltaMaxX + 1) * (deltaMaxY + 1) nodes for all the combinations of deltaX and deltaY. Each node has to hold the following information:
   index, initialized to 0
   deltaX and deltaY, initialized with the loop counters
   error, initialized to (path[0].height - grid[path[0].x + deltaX][path[0].y + deltaY].height)^2
5. Sort the array by error.
6. While arr[0].error <= arr[1].error:
   arr[0].index++
   if arr[0].index == n: return (arr[0].deltaX, arr[0].deltaY)
   arr[0].error += (path[arr[0].index].height - grid[path[arr[0].index].x + arr[0].deltaX][path[arr[0].index].y + arr[0].deltaY].height)^2
7. Remove node 0 and insert it at the correct position, so that the array becomes sorted again.
8. Repeat steps 6 and 7 until a solution is returned.
You can (and should, I think it's worth it) further improve the algorithm by first sorting the points in the path by extreme heights (lowest and highest points first, then moving towards the average height of the grid, e.g. sort by abs(height - averageGridHeight) descending). That way large errors show up early and branches can be cut much earlier.
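A minimal JavaScript sketch of this best-first search (the function name matchPathToGrid is made up; it assumes the path points are {x, y, height} objects already normalized so the minimum x and y are 0, and that grid[x][y] holds the height directly rather than in a .height field):
// Best-first search over all (deltaX, deltaY) offsets, as described above.
function matchPathToGrid(path, grid) {
  const n = path.length;
  const pathMaxX = Math.max(...path.map(p => p.x));
  const pathMaxY = Math.max(...path.map(p => p.y));
  const deltaMaxX = grid.length - 1 - pathMaxX;
  const deltaMaxY = grid[0].length - 1 - pathMaxY;
  // One node per offset, seeded with the error of the first path point only.
  const nodes = [];
  for (let dx = 0; dx <= deltaMaxX; dx++) {
    for (let dy = 0; dy <= deltaMaxY; dy++) {
      const e = path[0].height - grid[path[0].x + dx][path[0].y + dy];
      nodes.push({ index: 0, dx, dy, error: e * e });
    }
  }
  nodes.sort((a, b) => a.error - b.error);
  // Always extend the currently cheapest node by one more path point; the
  // first node to consume the whole path wins. (A binary heap would be the
  // idiomatic structure; a sorted array keeps the sketch short.)
  while (true) {
    const best = nodes.shift();
    best.index++;
    if (best.index === n) return { deltaX: best.dx, deltaY: best.dy };
    const p = path[best.index];
    const e = p.height - grid[p.x + best.dx][p.y + best.dy];
    best.error += e * e;
    // Binary search for the re-insertion point to keep the array sorted.
    let lo = 0, hi = nodes.length;
    while (lo < hi) {
      const mid = (lo + hi) >> 1;
      if (nodes[mid].error < best.error) lo = mid + 1; else hi = mid;
    }
    nodes.splice(lo, 0, best);
  }
}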
Purely theoretical, but what if you encode the 3D data as grayscale images (the 3D path as a grayscale image; the height map is probably already something like that, you just need to make sure the scales match)?
If you have the 3D path as a grayscale image and the height map as a grayscale image, perhaps you could do a needle-in-haystack search using computer vision techniques. For example, in OpenCV there are a couple of techniques for finding a subimage in a larger image:
Template Matching - (overly simplified) uses a sliding window doing pixel comparisons
Chamfer Matching - makes more use of edges, so probably more suitable for your goal. Bear in mind I ran into an allocation bug the last time I used it: it does work in the end, but needs a bit of love (malloc fixes and keeping track of the cost to get rid of false positives), and there are options to handle the scale difference.

How to pick the highest point in DC.JS scatterplot using brush

I'm using DC.JS scatterplots to let users select points of interest. If you use an elastic axis, you cannot select the highest-value point. Look at the DC.JS example (https://dc-js.github.io/dc.js/examples/scatter-brushing.html): you cannot select the highest point in the left or right plot.
In several cases, the highest or lowest point(s) are exactly what people need to be able to select, because those are the outliers we care about. If you disable the elastic axis and make sure you specify a range that is higher than the max value, you can select the point.
Is there another solution besides setting the axis domain based on the current min/max and expanding it a little bit? This is sometimes ugly when the minimum is 0 and your domain now needs to include some small negative number.
--Nico
Whenever I face this issue, I increase the y domain by 5% manually.
For instance:
var maxBalance = s.balanceDimension.top(1)[0].balance;
var balanceDomain = d3.scale.linear().domain([0, maxBalance + maxBalance * 0.05]);
s.amountOverallScore
.width(400)
.height(400)
.x(someDomain)
.y(balanceDomain)
...
Maybe this is not the best solution, but it always works for me.
Hope it helps (=.
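A slightly more general version of the same idea, as a hypothetical helper that pads the top of the domain while leaving a zero minimum at zero (so the domain never has to dip below 0):
// Hypothetical helper: pad a [min, max] domain by a fraction so extreme
// points stay selectable, without pushing a zero minimum negative.
function paddedDomain(min, max, fraction) {
  var pad = (max - min) * fraction;
  return [min === 0 ? 0 : min - pad, max + pad];
}
// e.g., with the same crossfilter dimension as above:
s.amountOverallScore.y(d3.scale.linear().domain(
  paddedDomain(0, s.balanceDimension.top(1)[0].balance, 0.05)));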
In my application the values were always positive and I used the following to get correct behavior:
// using reductio on the all_grp to get easy access to filtered min,max,avg,etc.
totalTimeMinValue = all_grp.top(1)[0].value.min;
totalTimeMaxValue = all_grp.top(1)[0].value.max;
// now use it to scale the charts we want
detail1_chart.y(d3.scale.linear().domain([totalTimeMinValue-1, totalTimeMaxValue+1]));
detail3_chart.y(d3.scale.linear().domain([totalTimeMinValue-1, totalTimeMaxValue+1]));
This keeps both charts in sync. An additional benefit was that my rather large dots (symbolSize = 15) are no longer being clipped.
Thanks Roger.

Field of view/ convexity map

Given a shape from a logical image, I am trying to extract the field of view from any point inside the shape in MATLAB.
I tried something that involves testing each line going through the point, but it is really, really slow. (I hope to do it for every point of the shape, or at least every point of its contour, which is still quite a few.)
I think a faster method would work iteratively by expanding a disk from the considered point, but I am not sure how to do it.
How can I find this field of view in an efficient way?
Any ideas or solutions would be appreciated, thanks.
Here is a possible approach (the principle behind a function I wrote, available on MATLAB Central):
I created this test image and an arbitrary point of view:
testscene=zeros(500);
testscene(80:120,80:120)=1;
testscene(200:250,400:450)=1;
testscene(380:450,200:270)=1;
viewpoint=[250, 300];
imsize=size(testscene); % checks the size of the image
It looks like this (the circle marks the viewpoint I chose):
The next line computes the longest distance to the edge of the image from the viewpoint:
maxdist=max([norm(viewpoint), norm(viewpoint-[1 imsize(2)]), norm(viewpoint-[imsize(1) 1]), norm(viewpoint-imsize)]);
angles=1:360; % use smaller increment to increase resolution
Then generate a set of points uniformly distributed around the viewpoint:
endpoints=bsxfun(@plus, maxdist*[cosd(angles)' sind(angles)'], viewpoint);
intersec=zeros(numel(angles),2); % preallocate the intersection points
for k=1:numel(angles)
    % sample the image along the ray from the viewpoint towards endpoint k
    [CX,CY,C] = improfile(testscene,[viewpoint(1), endpoints(k,1)],[viewpoint(2), endpoints(k,2)]);
    idx=find(C); % nonzero samples = obstacle pixels along the ray
    intersec(k,:)=[CX(idx(1)), CY(idx(1))]; % keep the first obstacle hit
end
What this does is draw lines from the viewpoint in each direction specified in the array angles and look for the position of the first intersection with an obstacle or the edge of the image.
This should help visualize the process:
Finally, let's use the built-in roipoly function to create a binary mask from a set of coordinates:
FieldofView = roipoly(testscene,intersec(:,1),intersec(:,2));
Here is what it looks like (obstacles in white, visible field in gray, viewpoint in red):

d3 world map -- calculate geo bounds on zoom

I am trying to adapt Mike Bostock's constrained zoom example (http://bl.ocks.org/mbostock/4987520) to fit my needs. Is there any way to calculate the geo bounds (in long/lat) of the projection when the map is zoomed? The d3.geo.bounds() method expects a 'feature' -- I really don't want to zoom on any particular feature. All I want is the geo bounds for the visible area of the projection.
Thanks in advance,
Kishore
My other answer was a misreading of the question, but I'll leave it there in case someone else misreads the question in the same way.
To find the bounding box of the visual area of your map on screen, simply use the projection.invert() function and feed it the top-left and bottom-right corners of your SVG. If you have a 500x500 SVG, then that looks like this:
projection.invert([0,0])
projection.invert([500,500])
This is a bounding box of your screen, in lat-long (or whatever coordinate system you're using).
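One caveat: if the zoom is applied as a transform on an SVG group rather than folded back into the projection, you have to undo the zoom's translate and scale before inverting. A sketch, assuming a d3 v3 zoom behaviour named zoom and an SVG of width x height:
// Undo the zoom behaviour's translate/scale, then invert to lat/lng.
// Assumes the zoom is applied as "translate(t) scale(s)" on a <g>, not
// written back into the projection itself.
var t = zoom.translate(), // current [tx, ty]
    s = zoom.scale();     // current scale factor
var topLeft     = projection.invert([(0 - t[0]) / s, (0 - t[1]) / s]);
var bottomRight = projection.invert([(width - t[0]) / s, (height - t[1]) / s]);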
After that, you can get the bounds of your features and test to see if they are fully-contained or intersecting or have their centroid within those bounds. I'm not going to explain how to do that here, because that's a different question with many different answers depending on which definition of "within these bounds" you decide on.
I'm not aware of any built-in functionality to give the bounds of a set of features, but here's a pretty simple function that does that:
function boundingExtent(features) {
  // start from an empty extent so features anywhere on the globe are handled
  var boundExtent = [[Infinity, Infinity], [-Infinity, -Infinity]];
  for (var x in features) {
    var thisBounds = d3.geo.bounds(features[x]);
    boundExtent[0][0] = Math.min(thisBounds[0][0], boundExtent[0][0]);
    boundExtent[0][1] = Math.min(thisBounds[0][1], boundExtent[0][1]);
    boundExtent[1][0] = Math.max(thisBounds[1][0], boundExtent[1][0]);
    boundExtent[1][1] = Math.max(thisBounds[1][1], boundExtent[1][1]);
  }
  return boundExtent;
}
With that, you can just pass the array of features to boundingExtent(featureArray) and it will give you back a bounding box for your entire set.

What does cluster.size do in D3JS?

I am trying to create a graph based on Mike Bostock's Hierarchical Edge Bundling (here is the gist). I need to make my JSON look like readme-flare-imports.json, but I can't figure out what "size" is. I read the API documentation and it didn't seem to help me. Also, it will be a dynamic JSON file based on a MySQL database, so I won't be able to set the size myself. Is anybody able to clear things up for me as to what it is, or how I may be able to determine what the size should be? Thank you in advance!
cluster.size determines how large an area the cluster will take up. You pass values to it like so:
// The angle
var x = 360;
// The radius
var y = window.innerHeight / 2;
cluster.size([x, y])
x will determine how much of a circle the cluster will use to branch out children. A value of 360 will use the entire circle to display all values. A value of 180 will only use half the circle to branch out values.
y will determine how wide the circle will become in any single direction, i.e., the radius of the circle.
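For context, this is roughly how a radial cluster layout is wired up in the d3 v3 edge-bundling setup (a sketch; the pixel values are illustrative, not taken from the example):
// Radial cluster layout: full circle (360 degrees) and a radius in pixels.
var radius = 480; // illustrative outer radius
var cluster = d3.layout.cluster()
    .size([360, radius - 120]); // full circle, radius shortened so labels fit outside the ring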
In the Hierarchical Edge Bundling example, I believe the size attribute in the JSON file is ignored, as I could not find anything in the code that cared about it.

Resources