Best way to determine actual H3 hexagon size? - h3

What is the best way to get the actual size of an H3 hexagon (area or edge length) given an H3Index and not the average constant provided by the H3 API?

Update:
H3 now includes functions for exact area and edge length measurements; see the H3 documentation.
Original Answer:
At present, the H3 library doesn't expose any functions for this (though we've considered it, and may add them in the future). We've decided that the H3 library is no better placed to calculate pure geographic areas/distances/etc. than any other geo library, so the best option is to find another geo library in your language of choice and apply it to the output of h3ToGeoBoundary.
You can see an example of doing this in JavaScript in this Observable notebook.
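For instance, here is a minimal sketch of that approach in Python, assuming the h3-py v3 bindings and pyproj for the geodesic area calculation (both are assumptions for the example, not part of the original answer):
import h3
from pyproj import Geod

geod = Geod(ellps="WGS84")

def cell_area_km2(h):
    # h3_to_geo_boundary returns the cell's vertices as (lat, lng) pairs
    boundary = h3.h3_to_geo_boundary(h)
    lats = [lat for lat, _ in boundary]
    lons = [lng for _, lng in boundary]
    # Geodesic polygon area on the WGS84 ellipsoid, in square meters (signed)
    area_m2, _perimeter = geod.polygon_area_perimeter(lons, lats)
    return abs(area_m2) / 1e6

print(cell_area_km2(h3.geo_to_h3(37.775, -122.418, 9)))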

Related

How would I modify the H3 library to change the base cell orientation?

The H3 library uses a Dymaxion orientation, which means that the hexagon grid is rotated to an unusual angle relative to the equator/meridian lines. This makes sense when modelling the Earth, as the twelve pentagons then all lie in the water, but would be unnecessary when using the library to map other spheres (like the sky or other planets). In this case it would be more intuitive and aesthetically pleasing to align the icosahedron to put a pentagon at the poles and along the meridian. I'm just trying to work out what I would need to change in the library to achieve that? It looks like I would need to recalculate the faceCenterGeo and faceCenterPoint tables in faceijk.c, but do I need to recalculate faceAxesAzRadsCII as well? I don't really understand what that latter table is...
Per this related answer, the main changes you'd need for other planets are to change the radius of the sphere (only necessary if you want to calculate distances or areas) and, as you ask, the orientation of the icosahedron. For the latter:
faceCenterGeo defines the icosahedron orientation in lat/lng points
faceCenterPoint is a table derived from faceCenterGeo that defines the center of each face as 3d coords on a unit sphere. You could create your own derivation using generateFaceCenterPoint.c
faceAxesAzRadsCII is a table derived from faceCenterGeo that defines the angle from each face center to each of its three vertices. This does not have a generation script, and TBH I don't know how it was originally generated. It's used in the core algorithms translating between grid coordinates and geo coordinates, however, so you'd definitely need to update it.
I'd strongly suggest that taking this approach is a Bad Idea:
It's a fair amount of work - not (just) the calculations, but recompiling the code, maintaining a fork, possibly writing bindings in other languages for your fork, etc.
You'd break most tests involving geo input or output, so you'd be flying blind as to whether your updated code is working as expected.
You wouldn't be able to take advantage of other projects built on H3, e.g. bindings for other languages and databases.
If you want to re-orient the geometry for H3, I'd suggest doing exactly that - apply a transform to the input geo coordinates you send to H3, and a reverse transform to the output geo coordinates you get from H3. This has a bunch of advantages over modifying the library code:
It's a lot easier
You could continue to use the maintained library
You could apply these transformations outside of the bindings, in the language of your choice
Your own code is well-separated from 3rd-party library code
There's probably a very small performance penalty to this approach, but in almost all cases that's a tiny price to pay compared to the difficulties you avoid.
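As a rough illustration of the wrapper approach (the rotation angle is an arbitrary example, and the function names assume the h3-py v3 bindings; neither is part of the library answer above):
import numpy as np
import h3

def _latlng_to_xyz(lat, lng):
    lat, lng = np.radians(lat), np.radians(lng)
    return np.array([np.cos(lat) * np.cos(lng),
                     np.cos(lat) * np.sin(lng),
                     np.sin(lat)])

def _xyz_to_latlng(v):
    x, y, z = v
    return np.degrees(np.arcsin(z)), np.degrees(np.arctan2(y, x))

# Example rotation: 30 degrees about the y-axis. Choose the rotation that
# aligns your sphere's poles/meridian with the icosahedron orientation you want.
theta = np.radians(30.0)
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])

def geo_to_h3_rotated(lat, lng, res):
    # Forward transform on the way in
    lat_r, lng_r = _xyz_to_latlng(R @ _latlng_to_xyz(lat, lng))
    return h3.geo_to_h3(lat_r, lng_r, res)

def h3_to_geo_rotated(h):
    # Inverse transform on the way out (R is orthogonal, so R.T inverts it)
    lat_r, lng_r = h3.h3_to_geo(h)
    return _xyz_to_latlng(R.T @ _latlng_to_xyz(lat_r, lng_r))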

in the galsim lib, what does the applyShear do?

Does it do the same image distortion as applyTransformation?
Does the function applyTransformation still exist?
And about the applyShear function: does it apply only shear, with no other lensing effects, i.e. no magnification or other higher-order effects?
All the written documentation, examples, and source code can be found on GitHub at the following location: https://github.com/GalSim-developers/GalSim
I suggest looking there first, and if there aren't any docs that answer your question directly, you could always study the source code for the applyShear functionality you are describing.
Good luck
In version 1.1, the methods applyShear and applyTransformation were deprecated. The preferred methods to use are now shear and transform.
The shear method is typically used as sheared_obj = obj.shear(g1=g1, g2=g2) where g1, g2 are the components of the reduced shear to be applied. You can also give e1,e2 (distortions rather than shear), or g, beta or e, beta (giving the magnitude and position angle), among other possibilities. See the docs for the Shear class for more information about ways to specify a shear in GalSim.
However you specify the shear, the shear method will shear the given surface brightness profile by that amount in a manner that preserves total flux.
The transform method is somewhat more general in that you can transform by any arbitrary 2x2 coordinate transformation matrix. Specifically, you specify an arbitrary Jacobian: dudx, dudy, dvdx, dvdy, where (x,y) are the original coordinates and (u,v) are the transformed coordinates. With this method, you could apply a transformation that is equivalent to a shear, but you would need to manually calculate the correct terms in the Jacobian.
The other difference is that transform does not necessarily preserve flux. The flux is only preserved if the Jacobian has unit determinant. So depending on your use case, you may want to rescale the flux by the determinant when you are done.
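For illustration, here is a minimal sketch of the two equivalent calls. It assumes the standard reduced-shear Jacobian convention; check the Shear class docs for the exact convention GalSim uses.
import numpy as np
import galsim

g1, g2 = 0.05, 0.02
gal = galsim.Exponential(flux=1., half_light_radius=1.0)

# The convenient way: shear() applies the reduced shear and preserves flux.
sheared = gal.shear(g1=g1, g2=g2)

# The general way: build the unit-determinant Jacobian of a pure shear by hand
# and pass it to transform(). Since det = 1 here, flux is preserved as well.
norm = 1.0 / np.sqrt(1.0 - g1**2 - g2**2)
transformed = gal.transform(dudx=norm * (1.0 + g1),
                            dudy=norm * g2,
                            dvdx=norm * g2,
                            dvdy=norm * (1.0 - g1))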
My guess is that you will most often want to use shear rather than transform. Here is a sample use case (taken from demo5.py in the GalSim examples directory):
gal = galsim.Exponential(flux=1., half_light_radius=gal_re)
[...]
for [...]:
    # Now in a loop to draw many versions of this galaxy with different shears, etc.
    # Make a new copy of the galaxy with an applied e1/e2-type distortion
    # by specifying the ellipticity and a real-space position angle
    this_gal = gal.shear(e=ellip, beta=beta)
    # Apply the gravitational reduced shear by specifying g1/g2
    this_gal = this_gal.shear(g1=gal_g1, g2=gal_g2)
    [...]
Hope this helps.
Note: The links above are current as of December, 2014. If they go stale, try navigating from the top level of the Doxygen documentation, which hopefully will still work.
I would just like to add to the response above from ne1410s (I don't have the reputation points to comment on his or her post).
In addition to that excellent information, there are thorough docstrings for most of the standard functionality, which you can check out using the python help() routine, i.e.
import galsim
help(galsim.GSObject.applyShear)
In the case of applyShear(), this routine is deprecated in the latest version of GalSim, but the docstring still exists and points the user to the new functionality. For older versions of GalSim, there is a complete docstring that explains how applyShear() works.
To summarize, going to the GalSim repository, you can get help in the following ways:
Go through the demos in examples/ (they are extensively commented).
Go through the quick reference guide in doc/
Go to the complete documentation including all docstrings, at http://galsim-developers.github.io/GalSim/
And then there are the docstrings for individual routines, which are accessible via help(). It should not, in general, be necessary to read the source code to learn how routines work.

algorithm - warping image to another image and calculate similarity measure

I have a question about calculating the best matching point of one image to another image through intensity-based registration. I'd like some comments on my algorithm:
1. Compute the warp matrix at this iteration.
2. For every point of image A:
2a. Warp the pixel coordinates of that image A point with the warp matrix into image B.
2b. Perform interpolation to get the corresponding intensity from image B if the warped coordinate lies inside image B.
2c. Calculate the similarity measure value between the image A pixel intensity and the interpolated image B intensity.
3. Cycle through every pixel in image A.
4. Cycle through every possible rotation and translation.
Would this be okay? Is there any relevant OpenCV code we can reference?
Comments on algorithm
Your algorithm appears good although you will have to be careful about:
Edge effects: You need to make sure that the algorithm does not favour matches where most of image A does not overlap image B. e.g. you may wish to compute the average similarity measure and constrain the transformation to make sure that at least 50% of pixels overlap.
Computational complexity. There may be a lot of possible translations and rotations to consider and this algorithm may be too slow in practice.
Type of warp. Depending on your application you may also need to consider perspective/lighting changes as well as translation and rotation.
Acceleration
A similar algorithm is commonly used in video encoders, although most will ignore rotations/perspective changes and just search for translations.
One approach that is quite commonly used is to do a gradient search for the best match. In other words, try tweaking the translation/rotation in a few different ways (e.g. left/right/up/down by 16 pixels) and pick the best match as your new starting point. Then repeat this process several times.
Once you are unable to improve the match, reduce the size of your tweaks and try again.
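A rough sketch of that search in Python/NumPy (translation only, sum-of-squared-differences as the similarity measure; all names here are made up for the example, and np.roll wraps at the edges, so a real implementation should crop or mask the borders):
import numpy as np

def ssd(a, b):
    # Sum of squared differences: lower is a better match
    return np.sum((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def shift(img, dx, dy):
    # Translate the image; np.roll wraps around, which ignores edge effects
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def coarse_to_fine_translation(img_a, img_b, start_step=16):
    dx = dy = 0
    step = start_step
    best = ssd(shift(img_a, dx, dy), img_b)
    while step >= 1:
        improved = False
        # Try tweaking the translation left/right/up/down by the current step
        for cand_dx, cand_dy in [(dx + step, dy), (dx - step, dy),
                                 (dx, dy + step), (dx, dy - step)]:
            score = ssd(shift(img_a, cand_dx, cand_dy), img_b)
            if score < best:
                best, dx, dy, improved = score, cand_dx, cand_dy, True
        if not improved:
            step //= 2  # no improvement: reduce the size of the tweaks
    return dx, dy, best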
Alternative algorithms
Depending on your application you may want to consider some alternative methods:
Stereo matching. If your 2 images come from stereo camera then you only really need to search in one direction (and OpenCV provides useful methods to do this)
Known patterns. If you are able to place a known pattern (e.g. a chessboard) in both your images then it becomes a lot easier to register them (and OpenCV provides methods to find and register certain types of pattern)
Feature point matching. A common approach to image registration is to search for distinctive points (e.g. types of corner or more general places of interest) and then try to find matching distinctive points in the two images. For example, OpenCV contains functions to detect SURF features. Google has published a great paper on using this kind of approach in order to remove rolling shutter noise that I recommend reading.
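As a quick sketch of the feature-point route (using OpenCV's ORB rather than SURF, since SURF is patented and not available in all builds; the file names are placeholders):
import cv2
import numpy as np

img_a = cv2.imread("a.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("b.png", cv2.IMREAD_GRAYSCALE)

# Detect distinctive points and compute binary descriptors for each image
orb = cv2.ORB_create(nfeatures=2000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# Match descriptors between the two images
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Estimate a homography with RANSAC to reject bad matches
# (use cv2.estimateAffinePartial2D if you only need rotation + translation + scale)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)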

medial axis transform implementation

How do I implement the Medial Axis Transform algorithm to transform the first image into the second?
(two example images; source: algorith at www.cs.sunysb.edu)
What libraries in C++/C# support the Medial Axis Transform?
There are many implementations of the medial axis transform on the Internet (personally I don't use the OpenCV library, but I'm sure it has a decent implementation). However, you could easily implement it yourself.
In order to perform medial axis transform, we need to define only one term: simple point.
A point P is a simple point iff removing P doesn't affect the number of connected components of either the foreground or the background. So you have to decide the connectivity (4 or 8) for the background and for the foreground; for this to work, pick a different one for each (if you are interested in why, look up the Jordan property on Google).
The medial axis transform can be implemented by sequentially deleting simple points. You get the final skeleton when there are no more simple points. You get the curve skeleton (I'm not sure of the standard English term; please correct me) if you stop when only endpoints or non-simple points remain. You provided examples of the latter in your question.
Finding simple points can easily be implemented with morphological operators or a look-up table. Hint: a point is a simple point iff, in its 3x3 local window, the number of connected components in the background is 1 and the number of connected components in the foreground is 1.
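If you just want something off the shelf to prototype against (in Python rather than the C++/C# the question asks about), scikit-image exposes this directly; a minimal sketch:
import numpy as np
from skimage.morphology import medial_axis

# Toy example: a filled rectangle with a notch; replace with your own binary image.
binary = np.zeros((60, 80), dtype=bool)
binary[10:50, 10:70] = True
binary[25:35, 40:80] = False

# medial_axis returns the skeleton plus the distance transform; together they
# give the medial axis *transform* (skeleton points plus their maximal-disc radii).
skeleton, distance = medial_axis(binary, return_distance=True)
radii = distance * skeleton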
There is a medial axis transform available in this C library: http://www.pinkhq.com/
There are a lot of other related functions as well.
Check out this function: http://www.pinkhq.com/medialaxis_8c.html

Advice for classifying symbols/images

I am working on a project that requires classification of characters and symbols (basically OCR that needs to handle single ASCII characters and symbols such as music notation). I am working with vector graphics (Paths and Glyphs in WPF), so the images can be of any resolution and rotation will be negligible. It will need to classify (and probably learn from) fonts and paths not in a training set. Performance is important, though high accuracy takes priority.
I have looked at some examples of image detection using Emgu CV (a .Net wrapper of OpenCV). However examples and tutorials I find seem to deal specifically with image detection and not classification. I don't need to find instances of an image within a larger image, just determine the kind of symbol in an image.
There seems to be a wide range of methods to choose from which might work and I'm not sure where to start. Any advice or useful links would be greatly appreciated.
You should probably look at the paper Gradient-Based Learning Applied to Document Recognition, although that refers to handwritten letters and digits. You should also read about Shape Context by Belongie and Malik. The keyword you should be looking for is digit/character/shape recognition (not detection, not classification).
If you are using EmguCV, the SURF features example (StopSign detector) would be a good place to start. Another (possibly complementary) approach would be to use the MatchTemplate(..) method.
However examples and tutorials I find seem to deal specifically with image detection and not classification. I don't need to find instances of an image within a larger image, just determine the kind of symbol in an image.
By finding instances of a symbol in an image, you are in effect classifying it. I'm not sure why you think that is not what you need.
Image<Gray, float> imgMatch = imgSource.MatchTemplate(imgTemplate, Emgu.CV.CvEnum.TM_TYPE.CV_TM_CCOEFF_NORMED);
double[] min, max;
Point[] pointMin, pointMax;
imgMatch.MinMax(out min, out max, out pointMin, out pointMax);
//max[0] is the score
if (max[0] >= (double) myThreshold)
{
    Rectangle rect = new Rectangle(pointMax[0], new Size(imgTemplate.Width, imgTemplate.Height));
    imgSource.Draw(rect, new Bgr(Color.Aquamarine), 1);
}
That max[0] gives the score of the best match.
Normalize all your images to some standard resolution (appropriately scaled and centered).
Break the canvas down into n square or rectangular blocks.
For each block, you can measure the number of black pixels or the ratio between black and white in that block and treat that as a feature.
Now that you can represent the image as a vector of features (each feature originating from a different block), you could use a lot of standard classification algorithms to predict what class the image belongs to.
Google 'viola jones' for more elaborate methods of this type.
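A rough sketch of the block-feature extraction described above (NumPy; the 8x8 grid and the binarization threshold are arbitrary example choices). The resulting vectors can be stacked into a matrix and fed to any standard classifier such as k-NN or an SVM.
import numpy as np

def block_features(img, n_blocks=8, threshold=128):
    # Binarize: assume a dark glyph on a light background
    binary = (img < threshold).astype(np.float64)
    h, w = binary.shape
    feats = []
    for i in range(n_blocks):
        for j in range(n_blocks):
            # Fraction of foreground pixels in this block is one feature
            block = binary[i * h // n_blocks:(i + 1) * h // n_blocks,
                           j * w // n_blocks:(j + 1) * w // n_blocks]
            feats.append(block.mean() if block.size else 0.0)
    return np.array(feats)

# Example: a 64x64 image becomes a fixed-length 64-element feature vector.
features = block_features(np.random.randint(0, 256, (64, 64)))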
