Converting a map in Cartesian projection (with spherical coordinates) to HEALPix projection - healpy

I have a map in a spherical coordinate system with a Cartesian (CAR) projection. The header reads CTYPE1 = 'GLON-CAR' and CTYPE2 = 'GLAT-CAR'. I want to convert this map to HEALPix. I understand that I need to convert the spherical coordinates to HEALPix pixels using the tool "ang2pix", but I am not able to understand how to fill those pixels with the values that are available in the Cartesian projection.

It's easiest not to re-invent the wheel and to benefit from the efforts of others who came before you. Check out the reproject package, and in particular reproject_to_healpix(); it will do everything you need.
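A minimal sketch of that route in Python (the file name and nside are placeholders, and the keyword names should be checked against the reproject documentation):

from astropy.io import fits
from reproject import reproject_to_healpix

# Input map in CAR projection, galactic coordinates
hdu = fits.open("car_map.fits")[0]          # CTYPE1='GLON-CAR', CTYPE2='GLAT-CAR'

# Resample onto a HEALPix grid; choose nside to roughly match the input pixel scale
healpix_map, footprint = reproject_to_healpix(hdu, "galactic", nside=512)

# healpix_map is a 1D array of HEALPix pixel values, e.g. viewable with healpy.mollview

The reprojection handles the pixel filling for you, so there is no need to loop over ang2pix by hand.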

Related

Using RGB images and point clouds, how to generate a depth map from the point clouds? (Python)

I am working on fusing lidar and camera images in order to perform an object classification algorithm using a CNN.
I want to use the KITTI dataset, which provides synchronized lidar and RGB image data. A lidar is a 3D scanner, so its output is a 3D point cloud.
I want to use the depth information from the point cloud as a channel for the CNN, but I have never worked with point clouds, so I am asking for some help. Will projecting the point cloud onto the camera image plane (using the projection matrix provided by KITTI) give me the depth map that I want? Is the Python library pcl useful, or should I move to C++ libraries?
If you have any suggestions, thank you in advance.
I'm not sure what the projection matrix provided by KITTI includes, so the answer is: it depends. If this projection matrix only contains a transformation matrix, you cannot generate a depth map from it. The 2D image has distortion that comes from the camera, and the point cloud usually doesn't have distortion, so you cannot "precisely" map the point cloud onto the RGB image without the intrinsic and extrinsic parameters.
PCL is not required to do this.
A depth map is essentially a mapping of depth values onto the RGB image. You can treat each point in the point cloud (each laser return of the lidar) as a pixel of the RGB image. Therefore, I think all you need to do is find which point in the point cloud corresponds to the first pixel (top-left corner) of the RGB image, then read the depth values from the point cloud based on the RGB image resolution.
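To make the projection idea concrete, here is a minimal numpy sketch (all names are illustrative; it assumes you already have a 3x4 camera projection matrix P from the KITTI calibration files and Nx3 lidar points already transformed into the camera frame):

import numpy as np

def depth_map_from_points(points_xyz, P, height, width):
    """Project Nx3 points (camera frame) through the 3x4 matrix P and keep the
    nearest depth per pixel. Returns an HxW depth image (0 = no data)."""
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])  # Nx4 homogeneous
    proj = pts_h @ P.T                        # Nx3: (u*z, v*z, z)
    z = proj[:, 2]
    keep = z > 0                              # only points in front of the camera
    u = np.round(proj[keep, 0] / z[keep]).astype(int)
    v = np.round(proj[keep, 1] / z[keep]).astype(int)
    z = z[keep]
    depth = np.zeros((height, width))
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi in zip(u[inside], v[inside], z[inside]):
        if depth[vi, ui] == 0 or zi < depth[vi, ui]:
            depth[vi, ui] = zi                # nearest surface wins
    return depth

The resulting array can be stacked with the RGB channels as an extra input channel for the CNN.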
This has nothing to do with the camera; it is all about the point cloud data. Let's say you have 10 million points and each point has x, y, z in meters. If the data is not in meters, first convert it. Then you need the position of the lidar. When you subtract the lidar's position from all the points one by one, you move the lidar to the (0,0,0) point, and then you can just print the points onto a white image. The rest is simple math, and there may be many ways to do it. The first that comes to my mind: think of RGB as base-256 digits. Let's say 1 cm is scaled to a change of 1 in blue, a 256 cm change equals a change of 1 in green, and a 256x256 = 65536 cm change equals a change of 1 in red. We know that the sensor is at (0,0,0), so if the RGB of a point is (1,0,0), that means it is 1x65536 + 0x256 + 0x1 = 65536 cm away. This could be done in C++. You can also use interpolation and closest-point algorithms to fill blanks if there are any.
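A tiny sketch of that base-256 encoding, purely illustrative (distances in centimetres, function names made up here):

def encode_depth_cm(d_cm):
    r, rem = divmod(int(d_cm), 256 * 256)   # 1 red step = 65536 cm
    g, b = divmod(rem, 256)                 # 1 green step = 256 cm, 1 blue step = 1 cm
    return r, g, b

def decode_depth_cm(r, g, b):
    return r * 256 * 256 + g * 256 + b

assert decode_depth_cm(*encode_depth_cm(65536)) == 65536   # (1, 0, 0) round-trips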

How to convert a picture into a different view based on the test position using ray tracing

I want to train a path loss model. I have a map picture, and I want to convert this map into different views based on the test location (x, y).
I need a conversion algorithm to produce many different map views based on the test location. I can show an example of this (sorry, this is hard to describe):
At the top left is the map with 4 columns; at the bottom right is the converted new map.
I want to use a "light source" (location A) to project onto the buildings in the map; some of the light will be blocked, and we will get a shadow at the test location.
So the shadow, derived from the AP location and the test location, can represent the environment information in this area.
If you have some idea to solve this, please let me know.
Thanks in advance
Cheng Hong
After discussing and googling, I found out that I should use some ray tracing technique on a 2D map.
In my research, I have two points, location A and location P, on a map.
Now I want to use ray tracing to convert the map, combining the two locations, into a new map view.
In this new map view, location A is at the center, and shadows are added resulting from the buildings (call them black columns) in the original map. This new map is then a kind of representation or descriptor of the map and the two location points. That is what I want to do.
You need to add more specs, like whether the map is a raster image or a vector. This has nothing to do with conversion (hence the retag); you just want to render your 2D map as a 3D scene, or as a 2D slice of it (a single horizontal line). This can be done really easily.
raster map
Google Wolfenstein-style ray casting rendering techniques, like:
Algorithm for 2D Raytracer
(a brute-force raster-map shadow-casting sketch is also given after the links below)
vector map
Construct a mesh from your map and render it with any 3D graphics API like OpenGL. To get started with this approach you need to grasp this:
Understanding 4x4 homogenous transform matrices
see also the sub-links in there ...
To implement the lighting conditions you can use any kind of shading. The easiest is normal shading. For more info see:
Normal shading: this may enlighten a thing or two (for beginners)
Normal/Bump mapping: see the fragment shader and search for the dot product
Mirrored light: see this for a slightly more complex lighting scheme
simple complete GL+VAO/VBO+GLSL+shaders example in C++
Curved Frosted Glass Shader? for sub-surface scattering
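For the raster-map case, here is a brute-force shadow-casting sketch in Python (a sketch only; it assumes the map is a 2D array where nonzero cells are buildings and (ax, ay) is the light/AP position in pixel coordinates inside the image, not inside a building):

import numpy as np

def shadow_map(occupancy, ax, ay, n_steps=512):
    """Return a boolean HxW array: True where a cell is shadowed as seen from (ax, ay)."""
    h, w = occupancy.shape
    shadow = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            # march along the ray from A towards (x, y)
            for t in np.linspace(0.0, 1.0, n_steps):
                px = int(round(ax + t * (x - ax)))
                py = int(round(ay + t * (y - ay)))
                if (px, py) == (x, y):
                    break                     # reached the target cell: it is lit
                if occupancy[py, px]:
                    shadow[y, x] = True       # a building blocks the ray
                    break
    return shadow

This is O(width x height x n_steps) and only meant to show the idea; a real implementation would step each ray with DDA/Bresenham, which is exactly what the Wolfenstein-style ray casting linked above does.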

Converting EPSG projection bounds to a D3.js map

Given an EPSG projection (say, this Alabama one: http://spatialreference.org/ref/epsg/26729/)
How can you take the given WGS84 projection bounds and use them in a D3.js projection?
For example, how would you know what projection, degree of rotation or bounding box to use to show the map?
This is a fairly complex question. The answer will differ based on the spatial reference system (SRS, or coordinate reference system, CRS) you are looking at and what your ultimate goal is.
I am using d3.js v4 in this answer.
Short Answer:
For example, how would you know what projection, degree of rotation or bounding box to use to show the map?
There is no hard and fast set of rules that encompasses all projections. Looking at the projection parameters can usually give you enough information to create a projection quickly - assuming the projection comes out of the box in d3.
The best advice I can give on setting the parameters, such as when to rotate or when to center and what parallels to use, is to zoom way out when refining the projection so you can see what each parameter is doing and where you are looking. Then do your scaling or extent fitting. That, and use a geojson validator for your bounding box, like this one.
Lastly, you could always use projected data and drop d3.geoProjection altogether (see this question); if all your data is already projected in the same projection, trying to define the projection is a moot point.
Datums
I'll note quickly that the question could be complicated further if you look at differences between datums. For example, the SRS you have referenced uses the NAD27 datum. A datum is a mathematical representation of the earth's shape; NAD27 will differ from NAD83 or WGS84, though all are measured in degrees, as the datum represents the three-dimensional surface of the earth. If you are mixing data that uses conflicting datums, you could have some precision issues; for example, the datum shift between NAD27 and NAD83 is not insignificant depending on your needs (wikipedia screenshot, couldn't link to image):
If shifts in location due to the use of multiple datums are a problem, you'll need more than d3 to convert them into one standard datum. D3 assumes you'll be using WGS84, the datum used by the GPS system. If these shifts are not a problem, then ignore this part of the answer.
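If you do need such a datum shift before handing data to d3, that step happens outside d3. A small sketch using pyproj (the choice of pyproj and the sample coordinate are my own additions, not part of the answer):

from pyproj import Transformer

# NAD27 geographic (EPSG:4267) -> WGS84 (EPSG:4326); always_xy keeps lon/lat order
nad27_to_wgs84 = Transformer.from_crs("EPSG:4267", "EPSG:4326", always_xy=True)
lon, lat = nad27_to_wgs84.transform(-85.8333, 30.5)   # sample point near the projection origin
print(lon, lat)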
The Example Projection
So, let's look at your projection, EPSG:26729:
PROJCS["NAD27 / Alabama East",
GEOGCS["NAD27",
DATUM["North_American_Datum_1927",
SPHEROID["Clarke 1866",6378206.4,294.9786982138982,
AUTHORITY["EPSG","7008"]],
AUTHORITY["EPSG","6267"]],
PRIMEM["Greenwich",0,
AUTHORITY["EPSG","8901"]],
UNIT["degree",0.01745329251994328,
AUTHORITY["EPSG","9122"]],
AUTHORITY["EPSG","4267"]],
UNIT["US survey foot",0.3048006096012192,
AUTHORITY["EPSG","9003"]],
PROJECTION["Transverse_Mercator"],
PARAMETER["latitude_of_origin",30.5],
PARAMETER["central_meridian",-85.83333333333333],
PARAMETER["scale_factor",0.99996],
PARAMETER["false_easting",500000],
PARAMETER["false_northing",0],
AUTHORITY["EPSG","26729"],
AXIS["X",EAST],
AXIS["Y",NORTH]]
This is a pretty standard description of a projection. Each type of projection will have parameters that are specific to it, so these won't always be the same.
The most important parts of this description are:
NAD27 / Alabama East: the projection name. It's not needed, but it's a good reference as it's a little easier to remember than an EPSG number, and references/tools may only use a common name instead of an EPSG number.
PROJECTION["Transverse_Mercator"]: the type of projection we are dealing with. This defines how the 3D coordinates representing points on the surface of the earth are translated to 2D coordinates on a Cartesian plane. If you see a projection here that is not on the d3 list of supported projections (v3 - v4), then you have a bit of work to do in defining a custom projection. But generally you will find a projection that matches this. The type of projection determines whether a map is rotated or centered on each axis.
PARAMETER["latitude_of_origin",30.5],
PARAMETER["central_meridian",-85.83333333333333],
These two parameters set the center of the projection. For a transverse Mercator, only the central meridian is important. See this demo of the effect of choosing a central meridian on a transverse Mercator.
The latitude of origin is chiefly used to set a reference point for the northings. The central meridian does this as well for the eastings but, as noted above, it also sets the meridian along which distortion is minimized from pole to pole (it is equivalent to the equator on a regular Mercator). If you really need proper northings and eastings so that you can compare x,y locations from a paper map and a web map sharing the same projection, d3 is probably not the best vehicle for this. If you don't care about measuring the coordinates in Cartesian coordinate space, these parameters do not matter: d3 is not replicating the coordinate system of the projection (measured in feet as false eastings/northings) but is replicating the same shape in SVG coordinate space.
So based on the relevant parameters in the projection description, a d3.geoProjection centered on the origin of this projection would look like:
d3.geoTransverseMercator()
.rotate([85.8333,0])
.center([0,30.5])
Why did I rotate roughly 86 degrees? This is how a transverse Mercator is built. In the demo of a transverse Mercator, the map is rotated along the x axis. Centering on the x axis will simply pan the map left and right and not change the nature of the projection. In the demo it is clear the projection is undergoing a change fundamentally different from panning; this is the rotation being applied. The rotation I used is the negative of the central meridian, as I turn the earth under the projection. So this projection is centered at -85.8333 degrees, or 85.8333 degrees West.
Since distortion on a transverse Mercator is consistent along a meridian, we can pan up and down without needing to rotate. This is why I use center on the y axis (here, as in other cases, you could also rotate on the y axis with a negative y, as this will spin the cylindrical projection underneath the map, giving the same result as panning).
If we are zoomed out a fair bit, this is what the projection looks like:
It may look pretty distorted, but it is only intended to show the area in and near Alabama. Zooming in it starts to look a lot more normal:
The next question is naturally: What about scale? Well this will differ based on the size of your viewport and the area you want to show. And, your projection does not specify any bounds. I'll touch on bounds at the end of the answer, if you want to show the extent of a map projection. Even if the projection has bounds, they may very well not align with the area you want to show (which is usually a subset of the overall projection bounds).
What about centering elsewhere? Say you want to show only a town that doesn't happen to lie at the center of the projection? Well, we can use center. Because we rotated the earth on the x axis, any centering is relative to the central meridian. Centering on [1, 30.5] will center the map 1 degree east of the central meridian (85.8333 degrees West). So the x component is relative to the rotation, while the y component is in relation to the equator (its latitude).
If adhering to the projection is important, this odd centering behavior is needed; if not, it might be easier to simply modify the x rotation so that you have a projection that looks like:
d3.geoTransverseMercator()
.center([0,y])
.rotate([-x,0])
...
This will be customizing the transverse Mercator to be optimized for your specific area, but comes at the cost of departing from your starting projection.
Different Projection Types
Different projections may have different parameters. For example, conical projections can have one (tangent) or two (secant) lines; these represent where the projection intersects the earth (and thus where distortion is minimized). These projections (such as an Albers or Lambert conformal conic) use a similar method for centering (rotate -x, center y) but have an additional parameter to specify the parallels that represent the tangent or secant lines:
d3.geoAlbers()
.rotate([-x,0])
.center([0,y])
.parallels([a,b])
See this answer on how to rotate/center an Albers (which is essentially the same for all conical projections that come to mind at the moment).
A planar/azimuthal projection (which I haven't checked) is likely to be centered only. But each map projection may have a slightly different method of 'centering' it (usually a combination of .rotate and .center).
There are lots of examples and SO questions on how to set different projection types/families, and these should help for most specific projections.
Bounding Boxes
However, you may have a projection that specifies bounds. Or, more likely, an image with bounds and a projection. In this event, you will need to specify those bounds. This is most easily done with a geojson feature using the .fitExtent method of a d3.geoProjection():
projection.fitExtent(extent, object):
Sets the projection’s scale and translate to fit the specified GeoJSON object in the center of the given extent. The extent is specified as an array [[x₀, y₀], [x₁, y₁]], where x₀ is the left side of the bounding box, y₀ is the top, x₁ is the right and y₁ is the bottom. Returns the projection.
(see also this question/answer)
I'll use the example in the question here to demonstrate the use of a bounding box to help define a projection. The goal will be to project the map below with the following knowledge: its projection and its bounding box (I had it handy, and couldn't find a good example with a defined bounding box quickly enough):
Before we get to the bounding box coordinates however, let's take a look at the projection. In this case it is something like:
PROJCS["ETRS89 / Austria Lambert",
GEOGCS["ETRS89",
DATUM["European_Terrestrial_Reference_System_1989",
SPHEROID["GRS 1980",6378137,298.257222101,
AUTHORITY["EPSG","7019"]],
AUTHORITY["EPSG","6258"]],
PRIMEM["Greenwich",0,
AUTHORITY["EPSG","8901"]],
UNIT["degree",0.01745329251994328,
AUTHORITY["EPSG","9122"]],
AUTHORITY["EPSG","4258"]],
UNIT["metre",1,
AUTHORITY["EPSG","9001"]],
PROJECTION["Lambert_Conformal_Conic_2SP"],
PARAMETER["standard_parallel_1",49],
PARAMETER["standard_parallel_2",46],
PARAMETER["latitude_of_origin",47.5],
PARAMETER["central_meridian",13.33333333333333],
PARAMETER["false_easting",400000],
PARAMETER["false_northing",400000],
AUTHORITY["EPSG","3416"],
AXIS["Y",EAST],
AXIS["X",NORTH]]
As we will be letting d3 choose the scale and center point based on the bounding box, we only care about a few parameters:
PARAMETER["standard_parallel_1",49],
PARAMETER["standard_parallel_2",46],
These are the two secant lines, where the map projection intersects the surface of the earth.
PARAMETER["central_meridian",13.33333333333333],
This is the central meridian, the number we will use for rotating the projection along the x axis (as one will do for all conical projections that come to mind).
And most importantly:
PROJECTION["Lambert_Conformal_Conic_2SP"],
This line gives us our projection family/type.
Altogether this gives us something like:
d3.geoConicConformal()
.rotate([-13.33333,0])
.parallels([46,49])
Now, the bounding box, which is defined by these limits:
East: 17.2 degrees
West: 9.3 degrees
North: 49.2 degrees
South: 46.0 degrees
The .fitExtent (and .fitSize) methods take a geojson object and translate and scale the projection appropriately. I'll use .fitSize here, as it skips margins around the bounds (fitExtent allows provision of margins; that's the only difference). So we need to create a geojson object with those bounds:
var bbox = {
"type": "Polygon",
"coordinates": [
[
[9.3, 49.2], [17.2, 49.2], [17.2, 46], [9.3, 46], [9.3,49.2]
]
]
}
Remember to use the right hand rule, and to have your end point the same as your start point (endless grief otherwise).
Now all we have to do is call this method and we'll have our projection. Since I'm using an image to validate my projection parameters, I know the aspect ratio I want. If you don't know the aspect ratio, you may have some excess width or height. This gives me something like:
var projection = d3.geoConicConformal()
.parallels([46,49])
.rotate([-13.333,0])
.fitSize([width,height],bbox)
And a happy-looking final product like this (keeping in mind a heavily downsampled world topojson):

Detecting the projection of an SVG map

I have an SVG map (called the "external map" hereafter) representing a portion of the globe, along with a map of the globe in its entirety (the "background map"). I would like to be able to detect what projection the external map uses (my final aim being to superimpose the two maps). For the moment I only consider the Mercator, equirectangular and orthographic projections.
I developed some code that shows the two maps (external on the left, background on the right) and allows the user to drag/zoom on the background map and choose one of these projections (link). If I manually fiddle with these properties, I conclude that the external map was probably created using the Mercator projection; but how could I have found this result programmatically? I thought about the following algorithm:
Ask the user to choose, say, 5 points that they would geolocate on both maps.
Calculate the (pixel-based) distances between each of the 5 points on the external map.
For each projection:
Center and scale the background map using the coordinates of the 5 points that the user located on the background map.
Calculate the pixel-based distances between the 5 points on the background map. Compare them with the distances calculated on step 2. The projection with the smallest distance differences is then considered to be the one that was used to create the external map.
This algorithm raises several questions:
In step 3, how can I calculate the center of the map using the located points? The projections are often distorted, so using proportionality to find it doesn't seem right.
For the same reasons, I don't know how I could determine the scale (zoom) to apply on the background map.
This algorithm seems quite natural, but the issues I raise make it look impossible to implement. Are there other (better) algorithms that could help me determine this projection? If I can find it manually, there must be a way to find it programmatically!
I use d3 for the map rendering if it helps.
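For reference, a rough Python sketch of the distance-comparison idea in steps 2 to 5, using simple closed-form forward projections for the three candidates instead of d3 (every function name here is illustrative, not something from the question):

import numpy as np

def mercator(lon, lat):
    lam, phi = np.radians(lon), np.radians(lat)
    return lam, np.log(np.tan(np.pi / 4 + phi / 2))

def equirectangular(lon, lat):
    return np.radians(lon), np.radians(lat)

def orthographic(lon, lat, lon0=0.0, lat0=0.0):
    lam, phi = np.radians(lon - lon0), np.radians(lat)
    phi0 = np.radians(lat0)
    return (np.cos(phi) * np.sin(lam),
            np.cos(phi0) * np.sin(phi) - np.sin(phi0) * np.cos(phi) * np.cos(lam))

def normalized_pairwise_distances(x, y):
    pts = np.column_stack([x, y])
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return d / d.max()                       # scale-free, so the unknown zoom drops out

def best_projection(pixel_points, lonlat_points):
    """pixel_points: Nx2 pixel coords of the chosen points on the external map;
    lonlat_points: the same N points as (lon, lat) located on the background map."""
    target = normalized_pairwise_distances(pixel_points[:, 0], pixel_points[:, 1])
    candidates = {
        "mercator": mercator,
        "equirectangular": equirectangular,
        # centre the orthographic on the chosen points so they are not near the limb
        "orthographic": lambda lon, lat: orthographic(lon, lat, lon.mean(), lat.mean()),
    }
    scores = {}
    for name, proj in candidates.items():
        x, y = proj(lonlat_points[:, 0], lonlat_points[:, 1])
        scores[name] = np.abs(normalized_pairwise_distances(x, y) - target).sum()
    return min(scores, key=scores.get)       # smallest total mismatch wins

Comparing normalised pairwise distances sidesteps the centering and zoom questions above, since distances are unchanged by translation and, after normalisation, by scale; it cannot, however, distinguish projections that differ only by a rigid rotation of the image.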

Convert 2D planes to 3D model

We have multiple 2D planar images of an object scanned from a fan-beam perspective. An example is in Fig 5 below. We have multiple grainy dotted planes that scan the whole object.
The issue with these images is that they cannot be directly mapped into 3D due to the fan-beam deformation.
Are there correction algorithms/methods that can be recommended so that these planes can be correctly mapped into 3D and the object can be reconstructed properly?
Depending on how you store your data, there might be various approaches. Guessing that you store the data as points ("grainy dotted planes"), you can interpolate between the corresponding points in consecutive planes and thereby get a scan of the entire object. This does require the points to be in the same frame, so you might have to apply some kind of transformation that relates each plane to a global frame.
Another procedure might be a least-squares fit of each plane, which can then be used to map the object together. You might also find some helpful approaches for scanning 3D objects using 2D methods. Hope this helps.
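For the least-squares suggestion, a minimal sketch (illustrative only) of fitting a plane to one scan via SVD, which yields the centroid and normal needed to place each plane in a common frame:

import numpy as np

def fit_plane(points):
    """points: Nx3 array of one scan plane. Returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                  # direction of least variance = plane normal
    return centroid, normal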

Resources