Convert H3Index to IJK Coordinate? - h3

Is it possible to convert from the "H3Index Representation" (https://h3geo.org/docs/core-library/h3Indexing) to "IJK Coordinates" (https://h3geo.org/docs/core-library/coordsystems)?
Assuming there is an API for it: which API would have to be used, and is that API available in the DLL/runtime library?
If there is no API for it, is there another way? (Maybe the vertex API can be used for this?)
The current version 3.x DLL seems to be missing some APIs, which are also marked as "internal functions" in the include headers.

We don't offer an external API for IJK coords - these coordinates would be difficult to work with, as they are only valid within a single icosahedron face, and crossing an icosahedron edge would result in a different coordinate system.
We do offer a relative IJ coordinate system through the API:
V3: experimentalH3ToLocalIj
V4: cellToLocalIj
These functions require an "origin" index that serves to anchor the coordinate system on a single icosahedron face, and they are capable of "unfolding" the icosahedron to cross over to one adjacent face, so they're generally going to be more reliable than direct access to the IJK coords. You can convert IJ to IJK coords fairly easily. Just note that different parts of the world will have different coordinate systems, hence the "origin" requirement - this is meant for local area use, not global use.
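For illustration, here is a minimal sketch of that workflow, assuming the h3-py v4 bindings (which expose cellToLocalIj as cell_to_local_ij); the same steps apply to the C API. The IJ-to-IJK step just sets k = 0 and normalizes, mirroring what the library does internally.

import h3

origin = h3.latlng_to_cell(37.7749, -122.4194, 9)  # anchor cell for the local frame
cell = h3.latlng_to_cell(37.7800, -122.4100, 9)    # some nearby cell at the same resolution

i, j = h3.cell_to_local_ij(origin, cell)           # signed IJ coordinates relative to the origin

# IJ -> IJK: set k = 0, then normalize so all three axes are non-negative
# and at least one is zero (the same effect as the library's internal ijkNormalize).
ii, jj, kk = i, j, 0
m = min(ii, jj, kk)
ii, jj, kk = ii - m, jj - m, kk - m
print((i, j), (ii, jj, kk))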

The main issue I was facing was missing API functions in the H3.DLL. To build the H3.DLL with all API implementations, the following commands should be used:
Step 2: Configure for a DLL release build:
cmake ..\h3V4 -DBUILD_TESTING=OFF -DBUILD_SHARED_LIBS=ON -DCMAKE_WINDOWS_EXPORT_ALL_SYMBOLS=ON
Step 3:
cmake --build . --config Release
Once this is done, the following API becomes available:
_h3ToFaceIjk
It can be used to convert an H3Index to a FaceIJK index. Four values are returned into a structure via a pointer:
The first value is some kind of face number, possibly related to the icosahedron.
The second, third, and fourth are the I, J, K coordinates, which go up to a certain range.
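For anyone else going this route, a rough ctypes sketch of calling it from the freshly built DLL might look like the following. The FaceIJK layout (a face number plus i/j/k as four C ints) and the exported symbol name are assumptions based on the description above and the v4 headers; internal functions are not a stable API.

import ctypes

class FaceIJK(ctypes.Structure):
    # assumed layout: int face; followed by CoordIJK {int i, j, k;}
    _fields_ = [("face", ctypes.c_int),
                ("i", ctypes.c_int),
                ("j", ctypes.c_int),
                ("k", ctypes.c_int)]

h3lib = ctypes.CDLL(r".\h3.dll")   # path to the DLL built with the commands above
h3lib._h3ToFaceIjk.argtypes = [ctypes.c_uint64, ctypes.POINTER(FaceIJK)]
h3lib._h3ToFaceIjk.restype = None

cell = 0x8928308280fffff           # example H3 index from the H3 docs (resolution 9)
out = FaceIJK()
h3lib._h3ToFaceIjk(cell, ctypes.byref(out))
print(out.face, out.i, out.j, out.k)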
The total coordinate system is indeed a bit messy when using this API.
It would indeed be useful, I think, to base the IJK coordinates on some selected hexagon, so that this hexagon can function as location 0,0,0,0 or just 0,0,0.
And then re-map all other hexagons around it. It might be even more useful if the coordinates could be negative as well as positive.
I will try to play with other conversion routines in the library, which might do this. I am not sure it will be perfect.
So far the documentation seems to indicate that coordinates in the FaceIJK system are also positive...
I could also try to come up with my own algorithm to attach desirable coordinates to the hexagons.
Basically
-1 -1
-1 +1
+1 +1
Would be desirable.
Pretend the above drawing is a hexagon, or a triangulation inside the hexagon... and those numbers are on the points/vertices of the hexagon.

Related

Rotating about a vector, Blueprints Unreal Engine 4

I have a problem where I want to rotate an Actor about an arbitrary vector. I wonder if there's a standard way of achieving that with Blueprints, given that I have the vector's coordinates. I didn't find anything useful online.
One more smaller issue I encountered, regarding the extraction of that vector:
Is there a way to extract world coordinates of some key-points of an Actor using Blueprints or the UE4 interface?
For example, given a door frame which is rotated 5 degrees around the X axis, can I extract the world coordinates of one of its corners using simple tools such as Blueprints or the interface?
Assuming you are rotating an actor (in this case, going by your example, that actor should be the door), you can take multiple approaches, but I'll list only three:
Option 1
First, define a Socket in the door mesh, in the position you want to obtain. Then, get its current position with a GetSocketLocation node.
Option 2
If your door is intended to be a blueprint and you need to get a specific point, you can define a Scene Component in that Blueprint in the specific position you want it and then create a function that returns the World Location of that component. This is particularly useful if that position can change in time.
Option 3
Simply have a Vector parameter defining the offset of your given point in the actor's local space. You'll still need a function to translate that offset from Local to World or World to Local, highly depending on your approach.
Given your context, this is one way to interpret your situation.
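Not Blueprints, but for reference, here is a small sketch (plain Python/NumPy, names purely illustrative) of the math behind Option 3 and behind rotating about an arbitrary axis (Rodrigues' formula), which you would reproduce with the equivalent Blueprint math nodes:

import numpy as np

def rotate_about_axis(v, axis, angle_deg):
    # Rodrigues' rotation formula: rotate vector v around the given axis by angle_deg.
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    theta = np.radians(angle_deg)
    return (v * np.cos(theta)
            + np.cross(axis, v) * np.sin(theta)
            + axis * np.dot(axis, v) * (1.0 - np.cos(theta)))

actor_location = np.array([100.0, 50.0, 0.0])   # actor's world position (illustrative)
local_offset = np.array([0.0, 40.0, 200.0])     # corner offset in the actor's local space
door_axis = np.array([1.0, 0.0, 0.0])           # the door frame is rotated about the X axis

# Local -> world for the corner, with the 5-degree rotation applied about the axis.
world_corner = actor_location + rotate_about_axis(local_offset, door_axis, 5.0)
print(world_corner)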

How can I convert tango point cloud data to worldspace point cloud?

I am modifying the tango example point cloud app.
I have exported point clouds along with its current pose data.
The point cloud coordinates we get are relative to current pose.
I wanted to know how can I convert the point cloud of different poses to worldspace coordinates (with respect to origin which should be first pose in this case)?
1 - Applying the pose transform to the point cloud points will give you world-space coordinates.
2 - If you're treating the very first pose as special (which I really wouldn't advise), then you are expressing your coordinates in the first pose's coordinate system, not world coordinates. If you really want to do this (please don't; simply computing inverse transforms is far better), then I'd say you want to invert the first transform and keep it handy, multiply each subsequent transform by the inverse of the first transform (to cancel out the 'contribution' of the first), and then transform the points with the result.
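A minimal NumPy sketch of both options, assuming each pose is available as a 4x4 homogeneous transform (device frame -> world frame) and the points are an Nx3 array in that pose's device frame:

import numpy as np

def to_world(pose, points):
    # Apply a 4x4 pose transform to Nx3 points -> Nx3 world-space points.
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])   # homogeneous coordinates
    return (pose @ pts_h.T).T[:, :3]

def to_first_pose_frame(pose_0, pose_i, points):
    # Express the points of pose_i in the coordinate system of the first pose.
    rel = np.linalg.inv(pose_0) @ pose_i      # cancels the first pose's contribution
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    return (rel @ pts_h.T).T[:, :3]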

in the galsim lib, what does the applyShear do?

Does it do the same image distortion as applyTransformation?
Does the applyTransformation function still exist?
And regarding the applyShear function: it only applies shear, with no other lensing effects, i.e. no magnification or other higher-order effects, right?
All the written documentation, examples, and source code can be found on GitHub at the following location: https://github.com/GalSim-developers/GalSim
I suggest looking there first, and if there aren't any docs that answer your question directly, you could always study the source code for the applyShear functionality you are describing.
Good luck
In version 1.1, the methods applyShear and applyTransformation were deprecated. The preferred methods to use are now shear and transform.
The shear method is typically used as sheared_obj = obj.shear(g1=g1, g2=g2) where g1, g2 are the components of the reduced shear to be applied. You can also give e1,e2 (distortions rather than shear), or g, beta or e, beta (giving the magnitude and position angle), among other possibilities. See the docs for the Shear class for more information about ways to specify a shear in GalSim.
However you specify the shear, the shear method will shear the given surface brightness profile by that amount in a manner that preserves total flux.
The transform method is somewhat more general in that you can transform by any arbitrary 2x2 coordinate transformation matrix. Specifically, you specify an arbitrary Jacobian: dudx, dudy, dvdx, dvdy, where (x,y) are the original coordinates and (u,v) are the transformed coordinates. With this method, you could apply a transformation that is equivalent to a shear, but you would need to manually calculate the correct terms in the Jacobian.
The other difference is that transform does not necessarily preserve flux. The flux is only preserved if the Jacobian has unit determinant. So depending on your use case, you may want to rescale the flux by the determinant when you are done.
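As a quick illustration of that flux caveat (a sketch with arbitrary Jacobian values, not taken from the GalSim docs), you can restore the original total flux with withFlux() after a non-unit-determinant transform:

import galsim

gal = galsim.Exponential(flux=1.0, half_light_radius=1.0)

# Arbitrary 2x2 Jacobian: dudx, dudy, dvdx, dvdy
dudx, dudy, dvdx, dvdy = 1.2, 0.1, 0.0, 0.9
stretched = gal.transform(dudx, dudy, dvdx, dvdy)

det = dudx * dvdy - dudy * dvdx      # != 1, so the total flux is not preserved
stretched = stretched.withFlux(1.0)  # put the flux back to its original value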
My guess is that you will most often want to use shear rather than transform. Here is a sample use case (taken from demo5.py in the GalSim examples directory):
gal = galsim.Exponential(flux=1., half_light_radius=gal_re)
[...]
for [...]:
# Now in a loop to draw many versions of this galaxy with different shears, etc.
# Make a new copy of the galaxy with an applied e1/e2-type distortion
# by specifying the ellipticity and a real-space position angle
this_gal = gal.shear(e=ellip, beta=beta)
# Apply the gravitational reduced shear by specifying g1/g2
this_gal = this_gal.shear(g1=gal_g1, g2=gal_g2)
[...]
Hope this helps.
Note: The links above are current as of December, 2014. If they go stale, try navigating from the top level of the Doxygen documentation, which hopefully will still work.
I would just like to add to the response above from ne1410s (I don't have the reputation points to comment on his or her post).
In addition to that excellent information, there are thorough docstrings for most of the standard functionality, which you can check out using the python help() routine, i.e.
import galsim
help(galsim.GSObject.applyShear)
In the case of applyShear(), this routine is deprecated in the latest version of GalSim, but the docstring still exists and points the user to the new functionality. For older versions of GalSim, there is a complete docstring that explains how applyShear() works.
To summarize, going to the GalSim repository, you can get help in the following ways:
Go through the demos in examples/ (they are extensively commented).
Go through the quick reference guide in doc/
Go to the complete documentation including all docstrings, at http://galsim-developers.github.io/GalSim/
And then there are the docstrings for individual routines, which are accessible via help(). It should not, in general, be necessary to read the source code to learn how routines work.

Tracking user defined points with OpenCV

I'm working on a project where I need to track two points in an image. So far, the best way I have of identifying these points is to get the user to click on them when the program is first run. I'm using the Lucas-Kanade pyramid method built into OpenCV (documented here), but as is to be expected, this doesn't work too well. Is there a better alternative algorithm for tracking points in OpenCV, or alternatively some other way of verifying the points I already have?
I'm currently considering using GoodFeaturesToTrack, and getting the distance from each point to the one that I want to track, and maybe some sort of vector pointing out the relationship between the two points, and using this information to determine my new point.
I'm looking for suggestions of ways to go about this, not necessarily code samples.
Thanks
EDIT: I'm tracking small movements, if that helps
If you are looking for a solution implemented in OpenCV, the pyramidal Lucas-Kanade (PLK) method is quite good; otherwise I would prefer a particle-filter-based tracker.
To improve your tracking performance with the PLK, be sure that you have set up the parameters correctly. E.g. for large motion you need about 3 or 4 pyramid levels. The window should not be too small (I prefer 17x17 to 27x27). Also keep in mind that the method needs textured areas to be able to track the points, that is, corner-like image content (the aperture problem).
I would propose seeding a set of points (ps) in a grid around the points (P) you want to track, and then using a forward-backward threshold to reject falsely tracked points. The motion of your points (P) is then computed from the mean motion of the surviving point sets (ps).
The forward-backward confidence is computed by estimating the motion from frame 1 to frame 2 (ptList1 -> ptList2), and then from frame 2 back to frame 1 using the points of ptList2 (ptList2 -> ptListRef). A motion vector is rejected if ||ptRef - pt1|| > fb_threshold.
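A minimal sketch of that idea with cv2.calcOpticalFlowPyrLK (parameter values follow the suggestions above; prev_gray/next_gray are consecutive grayscale frames and pts is an Nx1x2 float32 array of seed points placed around the targets):

import cv2
import numpy as np

lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

def track_fb(prev_gray, next_gray, pts, fb_threshold=1.0):
    # forward: frame 1 -> frame 2
    pts2, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None, **lk_params)
    # backward: frame 2 -> frame 1
    pts_ref, st_back, _ = cv2.calcOpticalFlowPyrLK(next_gray, prev_gray, pts2, None, **lk_params)
    # forward-backward error: distance between the original points and the re-tracked ones
    fb_err = np.linalg.norm(pts - pts_ref, axis=2).reshape(-1)
    good = (st.reshape(-1) == 1) & (st_back.reshape(-1) == 1) & (fb_err < fb_threshold)
    return pts2[good], good

# Usage: new_pts, good = track_fb(prev_gray, next_gray, pts)
# The tracked point's motion is then the mean motion of the survivors:
# motion = (new_pts - pts[good]).mean(axis=0)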

Finding cross on the image

I have a set of binary images in which I need to find the cross (examples attached). I use findContours to extract borders from the binary image, but I can't figure out how to determine whether a given shape (border) is a cross or not. Maybe OpenCV has some built-in methods which could help to solve this problem. I thought about solving this with machine learning, but I think there is a simpler way to do it. Thanks!
Viola-Jones object detection could be a good start. Though the main usage of the algorithm (AFAIK) is face detection, it was actually designed for any object detection, such as your cross.
The algorithm is a machine-learning-based algorithm (so you will need a set of classified "crosses" and a set of classified "not crosses"), and you will need to identify the significant "features" (patterns) that will help the algorithm recognize crosses.
The algorithm is implemented in OpenCV as cvHaarDetectObjects()
From the original image, let's say you've extracted sets of polygons that could potentially be your cross. Assuming that all of the cross is visible, to the extent that all edges can be distinguished as having a length, you could try the following (a rough sketch of the first step appears after this list).
Reject all polygons that do not have exactly the 12 vertices required to form your polygon.
Re-order the vertices such that the shortest edge length is first.
Create a best-fit perspective transformation that maps your vertices onto a cross of uniform size.
Examine the residuals generated by using this transformation to project your cross back onto the uniform cross, where the residual for any given point is the distance between the projected point and the corresponding uniform point.
If all the residuals are within your defined tolerance, you've found a cross.
Note that this works primarily due to the simplicity of the geometric shape you're searching for. Your contours will also need to have noise removed for this to work, e.g. each line within the cross needs to be converted to a single simple line.
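A rough sketch of the first filtering step (contour extraction plus simplification to 12 vertices); the epsilon factor is an arbitrary starting point, and the return signature of findContours assumes OpenCV 4.x:

import cv2

def candidate_crosses(binary_img):
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.02 * peri, True)   # noise removal / simplification
        if len(approx) == 12:                             # a cross outline has 12 corners
            candidates.append(approx)
    return candidates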
Depending on your requirements, you could try some local feature detector like SIFT or SURF. Check OpenSURF, which is an interesting implementation of the latter.
After some days of struggle, I came to the conclusion that the only robust way here is to use SVM + HOG. That's all.
You could erode each blob and analyze how its number of pixels goes down. No matter the rotation or scaling of the crosses, the count should always go down at the same ratio, except when you're closing in on the remaining center. Also, when the blob is small enough, you should expect it to be at the center of the original blob. You won't need any machine-learning algorithm or training data to solve this.
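A small sketch of that idea (kernel size and iteration cap are placeholders): repeatedly erode the blob mask and record how the pixel count decays; for a cross the decay ratio should stay roughly constant until only the center remains.

import cv2
import numpy as np

def erosion_profile(blob_mask, kernel_size=3, max_iters=50):
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    counts = [int(cv2.countNonZero(blob_mask))]
    mask = blob_mask
    for _ in range(max_iters):
        mask = cv2.erode(mask, kernel)
        n = int(cv2.countNonZero(mask))
        if n == 0:
            break
        counts.append(n)
    # ratios between successive erosion steps; roughly constant for a cross shape
    return [b / a for a, b in zip(counts, counts[1:])]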
