Is it possible to use the H3 grid system for celestial tiling? I want to use the (alpha, delta) = (right ascension, declination) coordinates (the equatorial coordinate system) to assign a point/galaxy to a tile, then collect all the points (galaxies) of a particular tile, find the neighboring tiles, and get a unique indexing of the tiles.
This should be fine, as long as you have a function to transform equatorial coordinates into lat/long coordinates. The inputs for H3 are expected to be lat/long points, specified either in degrees (for most of the bindings, e.g. h3-js and h3-py) or in radians (for the core C library). Assuming you have a function to translate the (right ascension, declination) tuple to a (lat, long) tuple, H3 should be entirely adequate as a grid system for any spherical coordinates.
The only Earth-centric aspects of H3 are:
The initial orientation of the icosahedron the grid is based on, which is optimized to place the vertices in Earth's oceans
Any functions like hexArea or edgeLength which output units of kilometers or meters based on the Earth's radius.
Other than these, the grid system should work for any spherical context.
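For example, a minimal sketch with the h3-py bindings (this assumes the v4 function names latlng_to_cell and grid_disk; v3 calls them geo_to_h3 and k_ring). Declination maps directly to latitude, and right ascension just needs to be wrapped into the [-180, 180) longitude range:

```python
import h3  # h3-py bindings (this sketch assumes the v4 API)

def radec_to_latlng(ra_deg, dec_deg):
    """Map equatorial coordinates to the lat/lng convention H3 expects.

    Declination is already a latitude in [-90, 90]; right ascension
    in [0, 360) is wrapped into longitude [-180, 180).
    """
    lat = dec_deg
    lng = ((ra_deg + 180.0) % 360.0) - 180.0
    return lat, lng

def galaxy_to_cell(ra_deg, dec_deg, resolution=5):
    """Assign a galaxy to an H3 cell (a unique cell index)."""
    lat, lng = radec_to_latlng(ra_deg, dec_deg)
    return h3.latlng_to_cell(lat, lng, resolution)

# Example: index a point, then collect its neighbouring tiles.
cell = galaxy_to_cell(ra_deg=201.365, dec_deg=-43.019)
neighbours = h3.grid_disk(cell, 1)  # the cell itself plus its immediate neighbours
```

Points falling in the same tile share the same cell index, so grouping galaxies by tile is just a group-by on that index.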
Related
Say I have a set of points from a sensor which are all within a margin of error on a 2D plane somewhere in 3D space. How would I go about transforming the coordinates of the points onto a 2D coordinate system, so that, for example, the convex hull of the points or the distances between the points don't change?
Assuming you know the equation of the plane (otherwise you can fit it by least squares or some other method), construct a new coordinate frame as follows:
get the normal vector;
form its cross product with an arbitrary vector having a different direction;
form the cross product of the normal and the second vector;
normalize all three and name the new axes z, x, y.
This creates an orthonormal basis into which you will transform the points. This corresponds to a rigid transform, which preserves all distances. You can then drop the z coordinate to get the orthogonal projections of the points onto the plane.
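A minimal numpy sketch of this construction, assuming the plane is given as a point p0 and a normal n (the function name and arguments are just placeholders for illustration):

```python
import numpy as np

def project_to_plane_2d(points, p0, n):
    """Build an orthonormal frame (x, y, z) with z along the plane normal
    and return the 2D coordinates of the points in that frame.

    points : (N, 3) array of 3D points (approximately coplanar)
    p0     : a point on the plane
    n      : the plane normal
    """
    z = n / np.linalg.norm(n)

    # Any helper vector not parallel to z will do.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(helper, z)) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])

    x = np.cross(helper, z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)  # already unit length, orthogonal to both

    rel = points - p0
    # Rigid transform: distances and convex hulls are preserved.
    # Dropping the z component gives the orthogonal projection onto the plane.
    return np.column_stack((rel @ x, rel @ y))
```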
I have a 2D shape (a circle) that I want to extrude along a 3D curve to create a 3D tube mesh.
Currently the way I generate cross-sections along the curve (which form the basis of the resulting mesh) is to take every control point along the curve, create a 3D transform matrix for it, then multiply the 2D points of my circle by those curve-point matrices to determine their location in 3D space along the curve.
To create the matrix (from 3 vectors), I use the tangent on the curve as the up vector, world-up ([0,1,0]) as the forward vector, and the cross product of the up/forward vectors as the right vector. All three vectors are also orthogonalized during the process to create the final matrix.
The problem comes when my curve tangent is identical to the world-up axis, i.e. my tangent vector is [0,1,0] and the world-up is [0,1,0]... Since the cross product of two parallel vectors is the zero vector and has no well-defined direction, the resulting extruded mesh has artifacts along those areas of the curve (pinching, twisting, etc).
I thought a potential solution would be to use the dot product of the curve tangent and the world-up as an interpolation value to shift my forward vector from world-up to world-right...in other words, as a curve tangent approaches [0,1,0], my forward vector approaches [1,0,0]...but that results in unwanted twisting along the final mesh as well.
How can I extrude my shape along a curve in a consistent manner that has no flipping/artifacts/twisting? I know it's possible since various off-the-shelf 3D applications can do it...I'm just not sure how.
One way I would approach this is to consider the tangent vector to the 3D curve as actually being the normal vector of the plane I am interested in.
Let's say the tangent vector is u = (x, y, z).
All you need now is two other vectors that are orthogonal to it, so let's construct them.
Let's construct v like so:
v = u × (z, x, y)
(the same coordinates, rotated). Because v is the result of the cross product of u and another vector, you know that v is orthogonal to u.
(This method will not work if u has equal x, y, z coordinates; in that case, construct the other vector by adding random numbers to at least two of the coordinates, then rinse and repeat.)
Then you can simply construct w like before: w = u × v.
Normalize all three and go.
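Here is a small numpy sketch of that construction (frame_from_tangent is just an illustrative name; which rotation of the coordinates you pick is arbitrary, as long as it does not reproduce a parallel vector):

```python
import numpy as np

def frame_from_tangent(u):
    """Build an orthonormal frame (u, v, w) from a nonzero curve tangent u."""
    u = np.asarray(u, dtype=float)
    rotated = u[[2, 0, 1]]               # (z, x, y): the same coordinates, rotated
    if np.allclose(u, rotated):          # x == y == z: perturb two coordinates and retry
        rotated = rotated + np.array([0.0, 0.123, 0.456])
    v = np.cross(u, rotated)             # orthogonal to u by construction
    w = np.cross(u, v)                   # orthogonal to both u and v
    u, v, w = (a / np.linalg.norm(a) for a in (u, v, w))
    return u, v, w
```

Using (u, v, w) as the cross-section's frame at each control point avoids relying on a fixed world-up vector, which is exactly what caused the degenerate cross product.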
In my program (using MATLAB), I specified (through dragging) the pedestrian lane as my Region Of Interest (ROI) with the coordinates [7, 178, 620, 190] (xmin, ymin, width, and height respectively), using the getrect, roipoly and insertshape functions. Refer to the image below.
The video from which this snapshot is taken has a resolution of 640x480 pixels (480p).
Defining a real-world space as my ROI by mouse dragging is barbaric. That's why the ROI coordinates must be derived mathematically.
What I'm getting at is using real-world measurements from the video capture site and applying the Pythagorean theorem from where the camera is positioned:
How do I obtain the equivalent pixel coordinates and parameters using the real-world measurements?
I'll try to split your question into 2 smaller questions.
A) How do I obtain the equivalent pixel coordinates of an interesting point? (practical question)
Your program should be able to retrieve/recognise a feature/marker that you positioned at the "real-world" interesting point. The output is a coordinate in pixels. This can be done quite easily (think about QR codes, for example).
B) What is the analytical relationship between a point in 3D space and its pixel coordinates in the image? (theoretical question)
This is the projection equation based on the pinhole camera model: the X, Y, Z 3D coordinates are related to the x, y pixel coordinates.
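In the standard textbook form of the model (with f_x, f_y the focal lengths in pixels and (c_x, c_y) the principal point), it reads:

$$
s \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
=
\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}
$$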
Cool, but some details have to be explained (and there won't be any "automatic short formula").
s represents the scale factor. A single pixel in an image could be the projection of infinitely many different points, due to perspective. In your photo, a pixel containing a piece of a car (when the car is present) will be the same pixel that contains a piece of street under the car (once the car has passed).
So there is no one-to-one relationship starting from pixel coordinates.
The matrix on the left involves the camera parameters (focal length, etc.), which are called intrinsic parameters. They have to be known to build the relationship between 3D coordinates and pixel coordinates.
The matrix on the right seems to be trivial: it is the combination of an identity matrix, which represents rotation, and a column of zeros, which represents translation. Something like T = [R|t].
Which rotation, which translation? You have to consider that every set of coordinates is implicitly expressed in its own reference system. So you have to determine the relationship between the reference system of your measurements and the camera reference system: not only the position of the camera in your 3D space, found with Euclidean geometry, but also the orientation of the camera (angles).
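To make the chain world point → pixel concrete, here is a small numpy sketch of the projection. The intrinsics and the pose below are made-up placeholder values for a 640x480 camera, not calibrated ones:

```python
import numpy as np

# Hypothetical intrinsics for a 640x480 camera (placeholders, not calibrated values).
fx, fy = 600.0, 600.0          # focal lengths in pixels
cx, cy = 320.0, 240.0          # principal point
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Extrinsics: rotation R and translation t bring world coordinates
# into the camera reference frame (placeholders here as well).
R = np.eye(3)
t = np.array([[0.0], [0.0], [0.0]])

def project(point_world):
    """Project a 3D world point to pixel coordinates with the pinhole model."""
    Xw = np.asarray(point_world, dtype=float).reshape(3, 1)
    p = K @ (R @ Xw + t)           # equals s * [x, y, 1]^T
    return (p[:2] / p[2]).ravel()  # divide by the scale factor s

print(project([1.0, 0.5, 10.0]))   # e.g. a point 10 units in front of the camera
```

Going the other way (pixel → 3D) needs an extra constraint, such as knowing that the point lies on the ground plane, precisely because of the scale factor s discussed above.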
I have an SVG image (a site plan) with width w, height h, which I would like to view on a map background.
To superimpose it at the right place on the map, I would like to stretch it to four arbitrary corners: x1,y1, x2,y2, x3,y3, x4,y4.
I figure I might be able to do this with a combination of SVG transforms (scale, rotate, skew, translate) but my maths is nowhere near up to the job. Any clues?
Rotation and translation can describe Euclidean (i.e. length-preserving) transformations. With isotropic scaling added in you obtain similarity transformations, and with anisotropic scaling and skewing you even get affine transformations. So that is the kind of transformation your operations can express.
But an affine transformation is already uniquely defined by three points and their images. Which means the fourth corner will end up in a location determined by the other three. To arbitrarily position four corners, you need a projective transformation.
See also this post about how to compute a projective transformation, how to apply it, and how to use it in JavaScript if the browser supports projective transformations in 3D.
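If you prefer to compute it yourself, here is a small numpy sketch that solves the standard 8x8 linear system for the homography from four corner correspondences (the corner values below are arbitrary examples; the same maths carries over directly to JavaScript):

```python
import numpy as np

def projective_transform(src, dst):
    """3x3 homography H mapping the four src corners to the four dst corners.

    src, dst : lists of four (x, y) tuples, no three of them collinear.
    H maps [x, y, 1]^T to a vector proportional to [x', y', 1]^T.
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)   # fix h33 = 1

def apply(H, x, y):
    """Apply the homography and divide by the homogeneous coordinate."""
    v = H @ np.array([x, y, 1.0])
    return v[0] / v[2], v[1] / v[2]

# Map the unit square (an SVG with w = h = 1) onto four arbitrary corners.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(10, 20), (110, 25), (120, 130), (5, 115)]
H = projective_transform(src, dst)
print(apply(H, 0.5, 0.5))   # centre of the SVG mapped onto the map
```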
I am trying to implement the method of Dalal and Triggs. I could implement the first stage, computing gradients on an image, and I could write the code that walks across the image in cells, but I don't understand the logic behind this stage.
I know it is first necessary to choose between signed (0-360 degrees) and unsigned (0-180 degrees) gradients.
I know I must create a data structure to store each cell's histogram, with n bins. I know what a histogram is, hence I understand I must visit each pixel, but I don't fully understand the method for classifying each pixel, getting the gradient orientation of that pixel, and building the histogram from this data.
In short, HOG is nothing but a dense representation of gradient orientations, weighted by their strengths, over overlapping local neighbourhoods.
You asked about the significance of finding each pixel's gradient orientation. In an image, the gradient orientation at each pixel indicates the direction of the boundary (the edge between two textures) of the object at that location, with respect to the X and Y axes. So if you group the orientations over a patch, block, or part of an object, they represent the distribution of edge directions of the object in that region in a very strong, unique way.

Now let us take a simple example: a circle. If you plot the gradient orientations of a circle as a histogram, you will get a straight (uniform) histogram (don't imagine HOG as just a simple plot of gradient orientations), because the orientations of the edges of a circle range from 0 degrees to 360 degrees if you sample at 360 consecutive locations; for a different object it is different. HOG does the same thing, but in a more sophisticated manner: it divides the image into overlapping blocks, divides each block into cells, and weights the histograms by the strengths of the local gradients.
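To make the binning concrete, here is a minimal numpy sketch of the per-cell histogram step, assuming unsigned gradients and 9 bins, and leaving out the bin interpolation and block normalisation of the full Dalal-Triggs method:

```python
import numpy as np

def cell_histograms(image, cell_size=8, n_bins=9):
    """Per-cell histograms of gradient orientations, weighted by magnitude.

    image : 2D float array (grayscale). Unsigned gradients: orientations in [0, 180).
    Returns an array of shape (cells_y, cells_x, n_bins).
    """
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.degrees(np.arctan2(gy, gx)) % 180.0   # unsigned: 0-180 degrees

    h_cells = image.shape[0] // cell_size
    w_cells = image.shape[1] // cell_size
    hist = np.zeros((h_cells, w_cells, n_bins))

    bin_width = 180.0 / n_bins
    for i in range(h_cells):
        for j in range(w_cells):
            sl = (slice(i * cell_size, (i + 1) * cell_size),
                  slice(j * cell_size, (j + 1) * cell_size))
            bins = (orientation[sl] / bin_width).astype(int) % n_bins
            # Each pixel votes into one orientation bin with weight = gradient magnitude.
            # (The full method also interpolates votes between neighbouring bins and
            #  normalises the histograms over overlapping blocks; omitted here.)
            np.add.at(hist[i, j], bins.ravel(), magnitude[sl].ravel())
    return hist
```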
Hope it is useful ...