Detecting the projection of an SVG map - algorithm

I have an SVG map (called the "external map" hereafter) representing a portion of the globe, along with a map of the globe in its entirety (the "background map"). I would like to be able to detect which projection the external map uses (my final aim being to superimpose the two maps). For the moment I only consider the Mercator, equirectangular and orthographic projections.
I developed some code that shows the two maps (external on the left, background on the right) and lets the user drag/zoom the background map and choose one of these projections (link). By fiddling with these settings manually I conclude that the external map was probably created using the Mercator projection; but how could I have found this result programmatically? I thought about the following algorithm:
1. Ask the user to choose, say, 5 points that they can geolocate on both maps.
2. Calculate the (pixel-based) distances between each of the 5 points on the external map.
3. For each projection:
   a. Center and scale the background map using the coordinates of the 5 points that the user located on the background map.
   b. Calculate the pixel-based distances between the 5 points on the background map, and compare them with the distances calculated in step 2. The projection with the smallest distance differences is then considered to be the one that was used to create the external map.
This algorithm raises several questions:
In step 3, how can I calculate the center of the map using the located points? The projections are often distorted, so using proportionality to find it doesn't seem right.
For the same reasons, I don't know how I could determine the scale (zoom) to apply to the background map.
This algorithm seems quite natural, but the issues I raise make it look impossible to implement. Are there other (better) algorithms that could help me determine this projection? If I can find it manually, there must be a way to find it programmatically!
I use d3 for the map rendering if it helps.
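One possible way to sidestep the centering and scaling questions entirely (a rough sketch using d3 v4 naming; geoPoints and extPixels are hypothetical arrays holding the 5 points as lon/lat pairs and as pixel coordinates on the external map): for each candidate projection, solve for the least-squares scale and translation mapping the projected points onto the external pixels, and keep the projection with the smallest residual.

const candidates = {
    mercator: d3.geoMercator(),
    equirectangular: d3.geoEquirectangular(),
    orthographic: d3.geoOrthographic()
};

// Sum of squared pixel errors after the best least-squares
// scale + translation fit (no rotation) of the projected points
// onto the external-map pixel coordinates.
function fitResidual(projection, geoPoints, extPixels) {
    const proj = geoPoints.map(p => projection(p));
    if (proj.some(p => p == null)) return Infinity; // point not projectable
    const n = proj.length;
    let mpx = 0, mpy = 0, mex = 0, mey = 0;
    for (let i = 0; i < n; i++) {
        mpx += proj[i][0] / n; mpy += proj[i][1] / n;
        mex += extPixels[i][0] / n; mey += extPixels[i][1] / n;
    }
    // Optimal scale s = <proj - mean, ext - mean> / |proj - mean|^2
    let num = 0, den = 0;
    for (let i = 0; i < n; i++) {
        const dx = proj[i][0] - mpx, dy = proj[i][1] - mpy;
        num += dx * (extPixels[i][0] - mex) + dy * (extPixels[i][1] - mey);
        den += dx * dx + dy * dy;
    }
    const s = num / den, tx = mex - s * mpx, ty = mey - s * mpy;
    let rss = 0;
    for (let i = 0; i < n; i++) {
        const ex = s * proj[i][0] + tx - extPixels[i][0];
        const ey = s * proj[i][1] + ty - extPixels[i][1];
        rss += ex * ex + ey * ey;
    }
    return rss;
}

// The candidate with the smallest residual is the best guess.
const best = Object.keys(candidates).reduce((a, b) =>
    fitResidual(candidates[a], geoPoints, extPixels) <=
    fitResidual(candidates[b], geoPoints, extPixels) ? a : b);

Note that this only fits scale and translation; the orthographic projection would additionally need its rotation fitted (it only shows one hemisphere), so in practice each candidate should first be rotated to roughly face the region covered by the 5 points.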

Related

How to convert a picture into different views based on the test position using ray tracing

I want to train a path loss model. I have a map image, and I want to convert this map into different views based on a test location (x, y).
I need a conversion algorithm that produces many different map views from a test location. I can show an example of this (sorry, it is hard to describe):
at the top left is the map with 4 columns; at the bottom right is the converted new map.
I want to use a "light source" (at location A) to project onto the buildings in the map; some of the light will be blocked, and we get a shadow at the test location.
The shadow, determined by the AP location and the test location, can thus represent the environment information in this area.
If you have any ideas on how to solve this, please let me know.
Thanks in advance,
Cheng Hong
After some discussion and googling, I found that I should use ray tracing techniques on a 2D map.
In my research, I have two points, location A and location P, on a map.
I now want to use ray tracing to convert the map, combined with the two locations, into a new map view.
In this new map view, location A is at the center, and shadows are added where the buildings (the black columns) in the original map block the light. This new map is then a kind of descriptor for the map and the two location points. That is what I want to do.
You need to add more specifics, such as whether the map is a raster image or a vector map. This has nothing to do with conversion (hence the retag); you just want to render your 2D map as a 3D scene, or as a 2D slice of it (a single horizontal line), which can be done really easily.
raster map
Google Wolfenstein-style ray casting rendering techniques, like:
Algorithm for 2D Raytracer
vector map
Construct a mesh from your map and render it with any 3D graphics API, like OpenGL. To get started with this approach you need to grasp this:
Understanding 4x4 homogenous transform matrices
see also the sub-links in there ...
To implement the lighting conditions you can apply any kind of shading. The easiest is normal shading. For more info see:
Normal shading: this may clarify a thing or two (for beginners)
Normal/Bump mapping: see the fragment shader and search for the dot product
mirrored light: see for a slightly more complex lighting scheme
simple complete GL+VAO/VBO+GLSL+shaders example in C++
Curved Frosted Glass Shader?: for subsurface scattering
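For the raster case, a minimal sketch of the shadow idea (my own illustration, not taken from the linked answers): march a straight ray from the light source A toward every cell, and mark the cell as shadowed if a building cell lies on the way. A real implementation would traverse cells with DDA/Bresenham instead of this naive sampling.

// Minimal 2D shadow casting on a raster map (illustrative sketch only).
// `map` is a 2D array where nonzero cells are buildings; `A` is the
// light source, e.g. { x: 10, y: 12 }.
function castShadowMap(map, A) {
    const h = map.length, w = map[0].length;
    const shadow = map.map(row => row.map(() => 0));
    for (let y = 0; y < h; y++) {
        for (let x = 0; x < w; x++) {
            // Sample along the segment from A to (x, y); if we cross a
            // building cell before arriving, (x, y) is in shadow.
            const dx = x - A.x, dy = y - A.y;
            const steps = Math.ceil(Math.hypot(dx, dy) * 2); // ~2 samples/cell
            for (let s = 1; s < steps; s++) {
                const cx = Math.round(A.x + dx * s / steps);
                const cy = Math.round(A.y + dy * s / steps);
                if ((cx !== x || cy !== y) && map[cy][cx] !== 0) {
                    shadow[y][x] = 1; // blocked by a building
                    break;
                }
            }
        }
    }
    return shadow;
}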

Converting EPSG projection bounds to a D3.js map

Given an EPSG projection (say, this Alabama one: http://spatialreference.org/ref/epsg/26729/),
how can you take the given WGS84 projection bounds and use them in a D3.js projection?
For example, how would you know what projection, degree of rotation or bounding box to use to show the map?
This is a fairly complex question. The answer will differ based on the spatial reference system (SRS, or coordinate reference system, CRS) you are looking at and what your ultimate goal is.
I am using d3.js v4 in this answer
Short Answer:
For example, how would you know what projection, degree of rotation or
bounding box to use to show the map?
There is no hard and fast set of rules that encompasses all projections. Looking at the projection parameters can usually give you enough information to create a projection quickly - assuming the projection comes out of the box in d3.
The best advice I can give on setting the parameters, such as when to rotate or when to center, what parallels to use, etc., is to zoom way out when refining the projection so you can see what each parameter is doing and where you are looking. Then do your scaling or extent fitting. Also, use a geojson validator for your bounding box, like this one.
Lastly, you could always use projected data and drop d3.geoProjection altogether (this question); if all your data is already projected in the same projection, trying to define the projection is a moot point.
Datums
I'll note quickly that the question could be complicated further if you look at differences between datums. For example, the SRS you have referenced uses the NAD27 datum. A datum is a mathematical representation of the earth's shape; NAD27 will differ from NAD83 or WGS84, though all are measured in degrees, as the datum represents the three-dimensional surface of the earth. If you are mixing data that uses conflicting datums, you could have some precision issues; for example, the datum shift between NAD27 and NAD83 is not insignificant, depending on your needs (wikipedia screenshot, couldn't link to image):
If shifts in locations due to use of multiple datums is a problem, you'll need more than d3 to convert them into one standard datum. D3 assumes you'll be using WGS84, the datum used by the GPS system. If these shifts are not a problem, then ignore this part of the answer.
The Example Projection
So, let's look at your projection, EPSG:26729:
PROJCS["NAD27 / Alabama East",
GEOGCS["NAD27",
DATUM["North_American_Datum_1927",
SPHEROID["Clarke 1866",6378206.4,294.9786982138982,
AUTHORITY["EPSG","7008"]],
AUTHORITY["EPSG","6267"]],
PRIMEM["Greenwich",0,
AUTHORITY["EPSG","8901"]],
UNIT["degree",0.01745329251994328,
AUTHORITY["EPSG","9122"]],
AUTHORITY["EPSG","4267"]],
UNIT["US survey foot",0.3048006096012192,
AUTHORITY["EPSG","9003"]],
PROJECTION["Transverse_Mercator"],
PARAMETER["latitude_of_origin",30.5],
PARAMETER["central_meridian",-85.83333333333333],
PARAMETER["scale_factor",0.99996],
PARAMETER["false_easting",500000],
PARAMETER["false_northing",0],
AUTHORITY["EPSG","26729"],
AXIS["X",EAST],
AXIS["Y",NORTH]]
This is a pretty standard description of a projection. Each type of projection will have parameters that are specific to it, so these won't always be the same.
The most important parts of this description are:
NAD27 / Alabama East: the projection name. Not strictly needed, but a good reference, as it's a little easier to remember than an EPSG number, and references/tools may only use a common name instead of an EPSG number.
PROJECTION["Transverse_Mercator"]: the type of projection we are dealing with. This defines how the 3D coordinates representing points on the surface of the earth are translated to 2D coordinates on a Cartesian plane. If you see a projection here that is not listed on the d3 list of supported projections (v3 - v4), then you have a bit of work to do in defining a custom projection. But generally you will find a projection that matches this. The type of projection determines whether a map is rotated or centered on each axis.
PARAMETER["latitude_of_origin",30.5],
PARAMETER["central_meridian",-85.83333333333333],
These two parameters set the center of the projection. For a transverse Mercator, only the central meridian is important. See this demo of the effect of choosing a central meridian on a transverse Mercator.
The latitude of origin is chiefly used to set a reference point for the northings. The central meridian does this as well for the eastings, but, as noted above, it also sets the meridian along which distortion is minimized from pole to pole (it is equivalent to the equator on a regular Mercator). If you really need proper northings and eastings so that you can compare x,y locations from a paper map and a web map sharing the same projection, d3 is probably not the best vehicle for this. If you don't care about measuring the coordinates in Cartesian coordinate space, these parameters do not matter: d3 is not replicating the coordinate system of the projection (measured in feet as false eastings/northings) but is replicating the same shape in SVG coordinate space.
So based on the relevant parameters in the projection description, a d3.geoProjection centered on the origin of this projection would look like:
d3.geoTransverseMercator()
    .rotate([85.8333,0])
    .center([0,30.5])
Why did I rotate roughly 86 degrees? This is how a transverse Mercator is built. In the demo of a transverse Mercator, the map is rotated along the x axis. Centering on the x axis would simply pan the map left and right and not change the nature of the projection; in the demo it is clear the projection is undergoing a change fundamentally different from panning. That is the rotation being applied. The rotation I used is negative, as I turn the earth under the projection. So this projection is centered at -85.8333 degrees, or 85.8333 degrees West.
Since distortion on a transverse Mercator is consistent along a meridian, we can pan up and down without needing to rotate. This is why I use center on the y axis (in this case and in others; you could also rotate on the y axis with a negative y, as this will spin the cylindrical projection underneath the map, giving the same result as panning).
If we are zoomed out a fair bit, this is what the projection looks like:
It may look pretty distorted, but it is only intended to show the area in and near Alabama. Zooming in it starts to look a lot more normal:
The next question is naturally: What about scale? Well this will differ based on the size of your viewport and the area you want to show. And, your projection does not specify any bounds. I'll touch on bounds at the end of the answer, if you want to show the extent of a map projection. Even if the projection has bounds, they may very well not align with the area you want to show (which is usually a subset of the overall projection bounds).
What about centering elsewhere? Say you want to show only a town that doesn't happen to lie at the center of the projection? Well, we can use center. Because we rotated the earth on the x axis, any centering is relative to the central meridian. Centering on [1,30.5] will center the map 1 degree East of the central meridian (85.8333 degrees West). So the x component is relative to the rotation, while the y component is relative to the equator (it is a plain latitude).
If adhering to the projection is important, this odd centering behavior is needed; if not, it might be easier to simply modify the x rotation so that you have a projection that looks like:
d3.geoTransverseMercator()
    .center([0,y])
    .rotate([-x,0])
    ...
This customizes the transverse Mercator to be optimized for your specific area, but it comes at the cost of departing from your starting projection.
Different Projections Types
Different projections may have different parameters. For example, conical projections can have one (tangent) or two (secant) lines; these represent the lines where the projection intersects the earth (and thus where distortion is minimized). These projections (such as an Albers or Lambert conformal conic) use a similar method for centering (rotate -x, center y) but have an additional parameter to specify the parallels that represent the tangent or secant lines:
d3.geoAlbers()
    .rotate([-x,0])
    .center([0,y])
    .parallels([a,b])
See this answer on how to rotate/center an Albers (which is essentially the same for all conical projections that come to mind at the moment).
A planar/azimuthal projection (which I haven't checked) is likely to be centered only. But each map projection may have a slightly different method for 'centering' it (usually a combination of .rotate and .center).
There are lots of examples and SO questions on how to set different projection types/families, and these should help for most specific projections.
Bounding Boxes
However, you may have a projection that specifies bounds. Or, more likely, an image with bounds and a projection. In this event, you will need to specify those bounds. This is most easily done with a geojson feature, using the .fitExtent method of a d3.geoProjection():
projection.fitExtent(extent, object):
Sets the projection’s scale and translate to fit the specified GeoJSON object in the center of the given extent. The extent is specified as an array [[x₀, y₀], [x₁, y₁]], where x₀ is the left side of the bounding box, y₀ is the top, x₁ is the right and y₁ is the bottom. Returns the projection.
(see also this question/answer)
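For a quick sense of the API, a two-line sketch (width, height, and bbox are assumed to be defined as in the example further below; .fitSize is the margin-less variant used later in this answer):

// fitSize fits the object to the full width x height box;
// fitExtent lets you keep a margin (20px on each side here).
projection.fitSize([width, height], bbox);
projection.fitExtent([[20, 20], [width - 20, height - 20]], bbox);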
I'll use the example in the question here to demonstrate the use of a bounding box to help define a projection. The goal will be to project the map below with the following knowledge: its projection and its bounding box (I had it handy, and couldn't find a good example with a defined bounding box quickly enough):
Before we get to the bounding box coordinates however, let's take a look at the projection. In this case it is something like:
PROJCS["ETRS89 / Austria Lambert",
GEOGCS["ETRS89",
DATUM["European_Terrestrial_Reference_System_1989",
SPHEROID["GRS 1980",6378137,298.257222101,
AUTHORITY["EPSG","7019"]],
AUTHORITY["EPSG","6258"]],
PRIMEM["Greenwich",0,
AUTHORITY["EPSG","8901"]],
UNIT["degree",0.01745329251994328,
AUTHORITY["EPSG","9122"]],
AUTHORITY["EPSG","4258"]],
UNIT["metre",1,
AUTHORITY["EPSG","9001"]],
PROJECTION["Lambert_Conformal_Conic_2SP"],
PARAMETER["standard_parallel_1",49],
PARAMETER["standard_parallel_2",46],
PARAMETER["latitude_of_origin",47.5],
PARAMETER["central_meridian",13.33333333333333],
PARAMETER["false_easting",400000],
PARAMETER["false_northing",400000],
AUTHORITY["EPSG","3416"],
AXIS["Y",EAST],
AXIS["X",NORTH]]
As we will be letting d3 choose the scale and center point based on the bounding box, we only care about a few parameters:
PARAMETER["standard_parallel_1",49],
PARAMETER["standard_parallel_2",46],
These are the two secant lines, where the map projection intersects the surface of the earth.
PARAMETER["central_meridian",13.33333333333333],
This is the central meridian, the number we will use for rotating the projection along the x axis (as one does for all the conical projections that come to mind).
And most importantly:
PROJECTION["Lambert_Conformal_Conic_2SP"],
This line gives us our projection family/type.
Altogether this gives us something like:
d3.geoConicConformal()
    .rotate([-13.33333,0])
    .parallels([46,49])
Now, the bounding box, which is defined by these limits:
East: 17.2 degrees
West: 9.3 degrees
North: 49.2 degrees
South: 46.0 degrees
The .fitExtent (and .fitSize) methods take a geojson object and translate and scale the projection appropriately. I'll use .fitSize here, as it skips margins around the bounds (fitExtent allows the provision of margins; that's the only difference). So we need to create a geojson object with those bounds:
var bbox = {
    "type": "Polygon",
    "coordinates": [
        [
            [9.3, 49.2], [17.2, 49.2], [17.2, 46], [9.3, 46], [9.3, 49.2]
        ]
    ]
}
Remember to use the right hand rule, and to have your end point the same as your start point (endless grief otherwise).
Now all we have to do is call this method and we'll have our projection. Since I'm using an image to validate my projection parameters, I know the aspect ratio I want. If you don't know the aspect ratio, you may have some excess width or height. This gives me something like:
var projection = d3.geoConicConformal()
    .parallels([46,49])
    .rotate([-13.333,0])
    .fitSize([width,height],bbox)
And a happy-looking final product (keeping in mind a heavily downsampled world topojson):

Best Elliptical Fit for irregular shapes in an image

I have an image with arbitrarily shaped regions (call them objects); let's assume the background pixels are labeled as zeros, whereas each object has a unique label (pixels of object 1 are labeled as 1, object 2 pixels are labeled as 2, ...). Now, for every object, I need to find the best elliptical fit of its pixels. This requires finding the center of the object, the major and minor axes, and the rotation angle. How can I find these?
Thank you.
Principal Component Analysis (PCA) is one way to go. See Wikipedia here.
The centroid is easy enough to find if your shapes are convex: just a weighted average of intensities over the x,y positions. PCA will give you the major and minor axes, and hence the orientation.
Once you have the centre and axes, you have the basis for a set of ellipses that cover your shape. By extending the axes in proportion and testing each pixel for in/out, you can find the ellipse that just covers your shape. Or, if you prefer, you can project each pixel position onto the major and minor axes, find the rough limits in one pass, and then test in/out on the "corner" cases.
It may help if you post an example image.
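For illustration, a minimal JavaScript sketch of this PCA approach (assuming the labeled image is a plain 2D array called labels; the 2x2 covariance matrix is diagonalized in closed form):

// PCA-based ellipse fit for one labeled region (illustrative sketch).
function fitEllipse(labels, label) {
    let n = 0, sx = 0, sy = 0;
    for (let y = 0; y < labels.length; y++)
        for (let x = 0; x < labels[y].length; x++)
            if (labels[y][x] === label) { n++; sx += x; sy += y; }
    const cx = sx / n, cy = sy / n; // centroid
    // Second central moments (covariance of the pixel positions).
    let sxx = 0, sxy = 0, syy = 0;
    for (let y = 0; y < labels.length; y++)
        for (let x = 0; x < labels[y].length; x++)
            if (labels[y][x] === label) {
                const dx = x - cx, dy = y - cy;
                sxx += dx * dx; sxy += dx * dy; syy += dy * dy;
            }
    sxx /= n; sxy /= n; syy /= n;
    // Closed-form eigenvalues of the 2x2 covariance matrix.
    const tr = sxx + syy, det = sxx * syy - sxy * sxy;
    const d = Math.sqrt(Math.max(0, tr * tr / 4 - det));
    return {
        center: [cx, cy],
        angle: 0.5 * Math.atan2(2 * sxy, sxx - syy), // major-axis orientation
        // sqrt(eigenvalue) is one standard deviation along each axis;
        // scale the ellipse up until it just covers the region.
        majorAxis: 2 * Math.sqrt(tr / 2 + d),
        minorAxis: 2 * Math.sqrt(Math.max(0, tr / 2 - d))
    };
}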
As you seem to be using Matlab, you can simply use the regionprops command, given that you have the Image Processing Toolbox.
It can extract all the information you need (and many more properties of image regions), and it will do the PCA for you, if the PCA-based approach suits your needs.
The documentation is here; look for the 'Centroid', 'Orientation', 'MajorAxisLength' and 'MinorAxisLength' properties specifically.

Map image/texture to a predefined uneven surface (t-shirt with folds, mug, etc.)

Basically I was trying to achieve this: impose an arbitrary image onto a pre-defined uneven surface (see the examples below).
I do not have a lot of experience with image processing or 3D algorithms, so here is the best method I can think of so far:
Predefine a set of coordinates (say, for a 10x10 grid, we have 100 coordinates, starting with (0,0), (0,10), (0,20), etc.). There will be 9x9 = 81 cells.
Record the transformation of each individual coordinate onto the t-shirt image, e.g. (0,0) becomes (51,31), (0,10) becomes (51,35), etc.
Triangulate the original image into 81x2 = 162 triangles (2 triangles for each cell). Transform each triangle of the image based on the coordinate transformations obtained in step 2 and draw it on the t-shirt image.
Problems/questions I have:
I don't know how to smooth out each triangle so that the image on the t-shirt does not look ragged.
Is there a better way to do it? I want to make sure I'm not reinventing the wheel here before I proceed with an implementation.
Thanks!
This is called digital image warping. There was a popular graphics text on it in the 1990s, George Wolberg's Digital Image Warping (which grew out of his thesis). You can also find an article on it from Dr. Dobb's Journal.
Your process is essentially correct. If you work pixel by pixel, rather than trying to use triangles, you'll avoid some of the problems you're facing. Scan across the pixels in target bitmap, and apply the local transformation based on the cell you're in to determine the coordinate of the corresponding pixel in the source bitmap. Copy that pixel over.
For a smoother result, you do your coordinate transformations in floating point and interpolate the pixel values from the source image using something like bilinear interpolation.
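As a sketch of that inverse-mapping loop (plain JavaScript; cellTransform(x, y) is a hypothetical function returning the fractional source coordinate for a target pixel, derived from the grid correspondences of step 2):

// Inverse warping: scan the target image and pull each pixel from the
// source via the local transformation, with bilinear filtering.
// `src` and `dst` are { width, height, data } grayscale buffers.
function warp(src, dst, cellTransform) {
    const at = (xx, yy) => src.data[yy * src.width + xx];
    for (let y = 0; y < dst.height; y++) {
        for (let x = 0; x < dst.width; x++) {
            const [u, v] = cellTransform(x, y); // fractional source position
            const x0 = Math.floor(u), y0 = Math.floor(v);
            if (x0 < 0 || y0 < 0 || x0 + 1 >= src.width || y0 + 1 >= src.height)
                continue; // maps outside the source image
            const fx = u - x0, fy = v - y0;
            // Bilinear interpolation of the four neighboring source pixels.
            const top = at(x0, y0) * (1 - fx) + at(x0 + 1, y0) * fx;
            const bot = at(x0, y0 + 1) * (1 - fx) + at(x0 + 1, y0 + 1) * fx;
            dst.data[y * dst.width + x] = top * (1 - fy) + bot * fy;
        }
    }
}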
It's not really a solution to the problem, just a workaround:
If you have a 3D model that represents the t-shirt, you can use DirectX/OpenGL and apply your image as a texture on the t-shirt. Then you can render the picture you want from any point of view.

Mapping corners to arbitrary positions using Direct2D

I'm using WIC and Direct2D (via SharpDX) to composite photos into video frames. For each frame I have the exact coordinates where each corner will be found. While the photos themselves have a standard aspect ratio (e.g. 4:3 or 16:9), the insertion points do not: they may be rotated, scaled, and skewed.
Now I know in Direct2D I can apply matrix transformations to accomplish this... but I'm not exactly sure how. The examples I've seen are more about applying specific transformations (e.g. rotate 30 degrees) than trying to match an exact destination.
Given that I know the exact coordinates (A,B,C,D) above, is there an easy way to map the source image onto the target? Alternately how would I generate the matrix given the source and destination coordinates?
If Direct3D is an option, all you will need to do is to render the quadrilateral as two triangles (with the frog texture mapped onto it).
To make sure there are no artifacts, render the quad as an indexed mesh, like in the example here (note that it shares vertex 0 and vertex 2 across both triangles). Of course, you can replace the actual vertex coordinates with A, B, C and D.
To begin, you can check out these tutorials for SlimDX, an excellent set of .NET bindings to DirectX.
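As a sketch of how the per-triangle mapping could be computed (plain JavaScript, illustrative only): for each of the two triangles, solve for the affine transform that carries its three source corners to the corresponding destination corners, then apply that matrix when rendering that half of the photo. Four arbitrary corners generally require a perspective transform, which is exactly why the quad is split into two triangles.

// Solve the 2D affine transform x' = a*x + b*y + tx, y' = c*x + d*y + ty
// mapping three source points `s` to three destination points `dp`
// (arrays of [x, y]), via Cramer's rule.
function affineFromTriangle(s, dp) {
    const det = s[0][0] * (s[1][1] - s[2][1])
              - s[0][1] * (s[1][0] - s[2][0])
              + (s[1][0] * s[2][1] - s[2][0] * s[1][1]);
    // Solve p*sx + q*sy + r = t for the three unknowns (p, q, r).
    const solve = t => [
        (t[0] * (s[1][1] - s[2][1]) - s[0][1] * (t[1] - t[2])
            + (t[1] * s[2][1] - t[2] * s[1][1])) / det,
        (s[0][0] * (t[1] - t[2]) - t[0] * (s[1][0] - s[2][0])
            + (s[1][0] * t[2] - s[2][0] * t[1])) / det,
        (s[0][0] * (s[1][1] * t[2] - t[1] * s[2][1])
            - s[0][1] * (s[1][0] * t[2] - t[1] * s[2][0])
            + t[0] * (s[1][0] * s[2][1] - s[1][1] * s[2][0])) / det
    ];
    const [a, b, tx] = solve([dp[0][0], dp[1][0], dp[2][0]]);
    const [c, d, ty] = solve([dp[0][1], dp[1][1], dp[2][1]]);
    return { a, b, c, d, tx, ty };
}

Each matrix can then be set as the render transform while drawing the corresponding triangular half of the image; the seam along the shared diagonal stays consistent because both transforms agree on the two shared vertices.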
It is not really possible to achieve this with Direct2D. It could be possible with Direct2D 1.1 (from Win8 Metro) with a custom vertex shader, but in the end, as ananthonline suggests, it will be much easier to do it with Direct3D11.
Also, you can use triangle strip primitives, which are easier to set up (you don't need to create an index buffer). For the coordinates, you can send them directly to a vertex shader without any transforms (the vertex shader will copy the input SV_POSITION directly to the pixel shader). You just have to map your coordinates into x [-1,1] and y [-1,1]. I suggest you start with the SharpDX MiniCubeTexture sample and change the matrix to perform an orthographic projection (instead of the sample's perspective projection).
