I want to preface by saying that I am not great at linear algebra and really only know the basics. I am trying to map a 3D line (given as latitude and longitude) onto a height map of the corresponding region with three.js. I read over this post here, and I think I get the idea behind LLA to ECEF conversion; the problem I am running into is taking the ECEF vectors and mapping them into Euclidean space so they line up with the height map.
Ex.
Here is the height map in Euclidean space, the vertical pink line is 1 unit away from the origin
Using the function from the linked thread, I am able to convert a LLA pair to a vector:
LLA: [39.5990128, -106.5210928] => ECEF: [2061284.281976205, 20613.52994200761,-6035835.798133525] => ??? Line up with the height map, but in Euclidean space? ???
I used Tangram to obtain the height map, which gives you the LLA of the center of the screen. I am sure this has to be factored into how the mapping takes place, but I am unsure where to even start. So far I have just been playing with the scale values in the renderer to try to match the two.
Is there more data I need to perform this mapping (i.e. the LLA bounds of the height map)? What is this operation called, specifically, besides "mapping"? What is the right place to start to solve this problem?
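For reference, the standard WGS-84 LLA-to-ECEF conversion (presumably what the function from the linked thread implements) can be sketched in Python. The constant and function names here are illustrative, not taken from the linked thread, and the result will only match your numbers above if the same ellipsoid and axis ordering were used:

```python
import math

# WGS-84 ellipsoid constants
A = 6378137.0           # semi-major axis (meters)
E2 = 6.69437999014e-3   # first eccentricity squared

def lla_to_ecef(lat_deg, lon_deg, alt_m=0.0):
    """Convert geodetic latitude/longitude/altitude to ECEF coordinates (meters)."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # prime vertical radius of curvature at this latitude
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + alt_m) * math.sin(lat)
    return x, y, z

print(lla_to_ecef(39.5990128, -106.5210928))
```

To line this up with a local height map, a common next step is to convert ECEF to a local tangent plane (ENU) centered on the LLA that Tangram reports for the screen center, then scale by meters-per-scene-unit.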
Related
I need to match a path recorded by lidar (x/y/height) onto a map tile (pixel height field)
I can assume the problem is 2.5D (i.e. a unique height for each (x, y) point, no caverns) and the region is small enough that the grid is uniform (no need to consider curvature). Naturally the track data is noisy, and I don't have any known locations in advance.
Rather than do a full 3D point-based Iterative Closest Point, are there any simple algorithms for purely surface path matching I should take a look at?
Specifically, it seems to be an image processing problem (x, y, height = intensity), so some sort of snake matching algorithm?
Instead of brute force (calculating the error for the full path from each starting point), you can expand only the best candidates and potentially save a lot of work.
1. Normalize:
   - Subtract (pathMinX, pathMinY) from all points in the path.
   - Subtract (gridMinX, gridMinY) from all points in the grid.
2. Find pathMaxX, pathMaxY, gridMaxX, gridMaxY.
3. Compute deltaMaxX = gridMaxX - pathMaxX and deltaMaxY = gridMaxY - pathMaxY.
4. Create an array or list with (deltaMaxX + 1) * (deltaMaxY + 1) nodes for all the combinations of deltaX and deltaY. Each node has to hold the following information:
   - index, initialized to 0
   - deltaX and deltaY, initialized with the loop counters
   - error, initialized to (path[0].height - grid[path[0].x + deltaX][path[0].y + deltaY].height)^2
5. Sort the array by error.
6. While arr[0].error <= arr[1].error:
   - arr[0].index++
   - if arr[0].index == n: return (arr[0].deltaX, arr[0].deltaY)
   - arr[0].error += (path[arr[0].index].height - grid[path[arr[0].index].x + arr[0].deltaX][path[arr[0].index].y + arr[0].deltaY].height)^2
7. Remove node 0 and insert it at the correct position, so that the array becomes sorted again.
8. Repeat steps 6 and 7 until a solution is returned.
You can (and should, I think it's worth it) further improve the algorithm if you first sort the points in the path by extreme heights (lowest and highest points first, then moving towards the average height with respect to the grid, e.g. sort by abs(height - averageGridHeight) descending). That way large errors are produced early, so branches can be cut much sooner.
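The procedure above can be sketched in Python, using a heap in place of the explicitly re-sorted array (popping the cheapest node, scoring one more point, and re-pushing it is equivalent to the expand/re-sort loop). The toy data layout here, path as (x, y, height) tuples and grid as a 2D list indexed grid[x][y], both already normalized, is an assumption for the sketch:

```python
import heapq

def match_path(path, grid):
    """Best-first search for the (dx, dy) shift minimizing squared height error.

    path: list of (x, y, height) tuples, normalized so min x/y are 0.
    grid: 2D list of heights, grid[x][y], normalized the same way.
    """
    n = len(path)
    gx, gy = len(grid), len(grid[0])
    path_max_x = max(p[0] for p in path)
    path_max_y = max(p[1] for p in path)
    heap = []
    # one node per candidate shift, seeded with the first point's error
    for dx in range(gx - path_max_x):
        for dy in range(gy - path_max_y):
            x, y, h = path[0]
            err = (h - grid[x + dx][y + dy]) ** 2
            heapq.heappush(heap, (err, 1, dx, dy))  # (error, next index, shift)
    while heap:
        err, i, dx, dy = heapq.heappop(heap)
        if i == n:  # cheapest node has scored the whole path: done
            return dx, dy
        x, y, h = path[i]
        err += (h - grid[x + dx][y + dy]) ** 2
        heapq.heappush(heap, (err, i + 1, dx, dy))
    return None
```

Because the correct shift accumulates little error, it keeps winning the pops and finishes first, while bad shifts are never expanded past their first few points.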
Purely theoretical, but what if you encode the 3D data as grayscale images (the 3D path as a grayscale image; the height map is probably already something like that, you just need to ensure the scales make sense)?
If you have the 3D path as a grayscale image and the height map as a grayscale image, perhaps you could do a needle-in-a-haystack search using computer vision techniques. For example, in OpenCV there are a couple of techniques for finding a subimage in a larger image:
Template Matching - (overly simplified) uses a sliding window doing pixel comparison
Chamfer Matching - makes more use of edges - probably more suitable for your goal. Bear in mind I ran into an allocation bug the last time I used this: it does work in the end, but needs a bit of love (malloc fixes, and keeping track of the cost to get rid of false positives), but there are options to handle the scale difference.
Template matching examples:
Chamfer matching example:
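As a toy illustration of what template matching does under the hood (essentially what cv2.matchTemplate computes with the TM_SQDIFF mode, stripped down to a pure-Python sliding window; the function name is made up), here is a minimal sketch. Note it has no rotation or scale invariance, which is the limitation that motivates Chamfer matching here:

```python
def template_match(image, template):
    """Brute-force template matching: slide the template over the image and
    return the (row, col) of the window with the smallest sum of squared
    differences (SSD)."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            # squared difference between the template and this window
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```

In practice you would use the vectorized OpenCV call on real images; this is only to show the principle of the sliding-window comparison.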
From a shape in a logical (binary) image, I am trying to extract the field of view from any point inside the shape in MATLAB:
I tried testing each line going through the point, but it is really, really slow. (I hope to do it for each point of the shape, or at least each point of its contour, which is still quite a few.)
I think a faster method would be to work iteratively by expanding a disk from the considered point, but I am not sure how to do it.
How can I find this field of view in an efficient way?
Any ideas or solution would be appreciated, thanks.
Here is a possible approach (the principle behind the function I wrote, available on Matlab Central):
I created this test image and an arbitrary point of view:
testscene=zeros(500);
testscene(80:120,80:120)=1;
testscene(200:250,400:450)=1;
testscene(380:450,200:270)=1;
viewpoint=[250, 300];
imsize=size(testscene); % checks the size of the image
It looks like this (the circle marks the view point I chose):
The next line computes the longest distance to the edge of the image from the viewpoint:
maxdist=max([norm(viewpoint), norm(viewpoint-[1 imsize(2)]), norm(viewpoint-[imsize(1) 1]), norm(viewpoint-imsize)]);
angles=1:360; % use smaller increment to increase resolution
Then generate a set of points uniformly distributed around the viewpoint:
endpoints=bsxfun(@plus, maxdist*[cosd(angles)' sind(angles)'], viewpoint);
intersec=zeros(numel(angles),2); % preallocate the intersection points
for k=1:numel(angles)
[CX,CY,C] = improfile(testscene,[viewpoint(1), endpoints(k,1)],[viewpoint(2), endpoints(k,2)]);
idx=find(C,1); % first nonzero sample: an obstacle pixel, or a NaN just past the image edge
if isempty(idx), idx=numel(C); end % profile ended exactly on the border
intersec(k,:)=[CX(idx), CY(idx)];
end
This draws a line from the viewpoint in each direction specified in the array angles and looks for the position of the intersection with an obstacle or with the edge of the image.
This should help visualizing the process:
Finally, let's use the built-in roipoly function to create a binary mask from a set of coordinates:
FieldofView = roipoly(testscene,intersec(:,1),intersec(:,2));
Here is what it looks like (obstacles in white, visible field in gray, viewpoint in red):
I'm trying to implement an EKG-style "heartbeat" chart from a design, and I'm having a hard time getting D3 to draw a path like I need.
The design spec states that the graph needs to return to a neutral/zero point between each and every data point, and that the curved path from the zero point should stay close to the data point itself and rise sharply. See the attached images below.
Here is the design....
And here is my attempt to match the curve with dummy data (black circle data points)...
The graph has a time scale X axis and a linear Y axis that ranges from 0 to 2 (my data points are 0,1, or 2 respectively). The line is using 'monotone' interpolation which is the least terrible looking.
Question:
Is there a better way to get this appearance without dummy data points?
Question-behind-the-question:
What is the best way to get D3 to draw a custom path (e.g. from a function)?
Sub-question:
Why does the monotone interpolation curve the path inward so sharply between the last 2 data points?
Any help is appreciated! The designers and client won't budge on this one, so I have to get it as close as possible :(
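One route for the question-behind-the-question: an SVG path is just a string, so you can build the "d" attribute yourself from a function and hand it to a selection via .attr("d", ...) instead of using a line interpolator. A rough sketch, where ekgPath and the spike half-width w are made-up names/parameters and the Bézier control points would need tuning against the design:

```javascript
// Build an SVG path string for an EKG-style line: the curve sits on the
// baseline (y = y0) between points and spikes sharply at each data point.
// `points` are [x, y] pairs in pixel space (already run through your scales);
// `w` is the half-width of each spike's foot (a hypothetical tuning knob).
function ekgPath(points, y0, w) {
  let d = "M" + (points[0][0] - w) + "," + y0;
  for (const [x, y] of points) {
    d += "L" + (x - w) + "," + y0;                  // flat baseline up to the spike
    d += "C" + (x - w / 4) + "," + y0 + " " +       // control points near x give
         (x - w / 4) + "," + y + " " + x + "," + y; // a steep rise at the point
    d += "C" + (x + w / 4) + "," + y + " " +
         (x + w / 4) + "," + y0 + " " + (x + w) + "," + y0;
  }
  return d;
}

const d = ekgPath([[50, 20], [120, 60]], 100, 15);
console.log(d); // pass to selection.append("path").attr("d", d)
```

This avoids dummy data entirely: the baseline returns are part of the path string, not of the dataset, so the bound data stays just the real points.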
I'm searching for a certain object in my photograph:
Object: the outline of a rectangle with an X in the middle. It looks like a rectangular checkbox. That's all - no fill, just lines. The rectangle will have the same length-to-width ratio, but it could be any size and at any rotation in the photograph.
I've looked at a whole bunch of image recognition approaches, but I'm trying to determine the best one for this specific task. Most importantly, the object is made of lines and is not a filled shape. Also, there is no perspective distortion, so the rectangular object will always have right angles in the photograph.
Any ideas? I'm hoping for something that I can implement fairly easily.
Thanks all.
You could try using a corner detector (e.g. Harris) to find the corners of the box, the ends and the intersection of the X. That simplifies the problem to finding points in the right configuration.
Edit (response to comment):
I'm assuming you can find the corner points in your image, the 4 corners of the rectangle, the 4 line endings of the X and the center of the X, plus a few other corners in the image due to noise or objects in the background. That simplifies the problem to finding a set of 9 points in the right configuration, out of a given set of points.
My first try would be to look at each corner point A. Then I'd iterate over the points B close to A. Now if I assume that (e.g.) A is the upper left corner of the rectangle and B is the lower right corner, I can easily calculate, where I would expect the other corner points to be in the image. I'd use some nearest-neighbor search (or a library like FLANN) to see if there are corners where I'd expect them. If I can find a set of points that matches these expected positions, I know where the symbol would be, if it is present in the image.
You have to try whether that is good enough for your application. If you have too many false positives (sets of corners of other objects that accidentally form a rectangle + X), you could check if there are lines (i.e. high contrast in the right direction) where you would expect them. And you could check if there is low contrast where there are no lines in the pattern. This should be relatively straightforward once you know the points in the image that correspond to the corners/line endings in the object you're looking for.
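The expected-position check described above could start out as simple as this Python sketch. The function name, the tolerance, and the simplification of checking only the remaining rectangle corners plus the X center are all illustrative; a real version would iterate over candidate (A, B) pairs and use a k-d tree or FLANN instead of a linear scan:

```python
import math

def matches_symbol(corners, a, b, tol=2.0):
    """Hypothesis test: `a` is the upper-left and `b` the lower-right corner
    of the rectangle. Predict where the other two rectangle corners and the
    X center should be, and check that a detected corner lies within `tol`
    pixels of each prediction.

    corners: list of (x, y) detected corner points.
    """
    ax, ay = a
    bx, by = b
    # predicted positions (axis-aligned case; a rotated hypothesis would
    # rotate these around the center accordingly)
    expected = [
        (ax, by),                         # lower-left corner
        (bx, ay),                         # upper-right corner
        ((ax + bx) / 2, (ay + by) / 2),   # center of the X
    ]
    def near(p):
        return any(math.dist(p, c) <= tol for c in corners)
    return all(near(p) for p in expected)
```

If this passes for some (A, B) pair, you then run the contrast checks along the predicted line segments to reject accidental configurations.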
I'd suggest the Generalized Hough Transform. It seems you have a fairly simple, fixed shape, and the Generalized Hough Transform should be able to detect that shape at any rotation or scale in the image. You may need to threshold the original image, or pre-process it in some way, for this method to be useful though.
You can use local features to identify the object in the image. Feature detection wiki
For example, you can calculate features on a reference image which contains only the object you're looking for and save the results, let's say, to a plain text file. After that, you can search for the object just by comparing the newly calculated features (on images of complex scenes containing the object) with the reference ones.
Here's some good resource on local features:
Local Invariant Feature Detectors: A Survey
Greetings,
I'm working on a game project that uses a 3D variant of hexagonal tile maps. Tiles are actually cubes, not hexes, but are laid out just like hexes (because a square can be turned to a cube to extrapolate from 2D to 3D, but there is no 3D version of a hex). Rather than a verbose description, here goes an example of a 4x4x4 map:
(I have highlighted an arbitrary tile (green) and its adjacent tiles (yellow) to help describe how the whole thing is supposed to work; but the adjacency functions are not the issue, that's already solved.)
I have a struct type to represent tiles, and maps are represented as a 3D array of tiles (wrapped in a Map class to add some utility methods, but that's not very relevant).
Each tile is supposed to represent a perfectly cubic space, and they are all exactly the same size. Also, the offset between adjacent "rows" is exactly half the size of a tile.
That's enough context; my question is:
Given the coordinates of two points A and B, how can I generate a list of the tiles (or, rather, their coordinates) that a straight line between A and B would cross?
That would later be used for a variety of purposes, such as determining Line-of-sight, charge path legality, and so on.
BTW, this may be useful: my maps use (0,0,0) as a reference position. The 'jagging' of the map can be defined as offsetting each tile by ((y+z) mod 2) * tileSize/2.0 to the right from the position it'd have in a "sane" cartesian system. For the non-jagged rows, that yields 0; for rows where (y+z) mod 2 is 1, it yields half a tile.
I'm working on C#4 targeting the .NET Framework 4.0, but I don't really need specific code, just the algorithm to solve the weird geometric/mathematical problem. I have been trying for several days to solve this, to no avail; trying to draw the whole thing on paper to "visualize" it didn't help either :( .
Thanks in advance for any answer
Until one of the clever SOers turns up, here's my dumb solution. I'll explain it in 2D 'cos that makes it easier to explain, but it will generalise to 3D easily enough. I think any attempt to work this entirely in cell-index space is doomed to failure (though I'll admit that's just what I think, and I look forward to being proved wrong).
So you need to define a function to map from cartesian coordinates to cell indices. This is straightforward, if a little fiddly. First, decide whether point(0,0) is the bottom-left corner of cell(0,0), the centre, or some other point. Since it makes the explanations easier, I'll go with the bottom-left corner. Observe that any point(x, floor(y)==0) maps to cell(floor(x), 0). Indeed, any point(x, even(floor(y))) maps to cell(floor(x), floor(y)).
Here, I invent the boolean function even, which returns True if its argument is an even integer. I'll use odd next: any point(x, odd(floor(y))) maps to cell(floor(x - 0.5), floor(y)).
Now you have the basics of the recipe for determining lines-of-sight.
You will also need a function to map from cell(m,n) back to a point in cartesian space. That should be straightforward once you have decided where the origin lies.
Now, unless I've misplaced some brackets, I think you are on your way. You'll need to:
decide where in cell(0,0) you position point(0,0); and adjust the function accordingly;
decide where points along the cell boundaries fall; and
generalise this into 3 dimensions.
Depending on the size of the playing field you could store the cartesian coordinates of the cell boundaries in a lookup table (or other data structure), which would probably speed things up.
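The two mapping functions can be sketched in Python for the 2D case. Assumptions, per the recipe above: unit tiles, the bottom-left-corner convention for point(0,0), and points exactly on a cell boundary belonging to the higher-index cell:

```python
import math

def point_to_cell(x, y, tile=1.0):
    """Map a cartesian point to a (col, row) cell index in the jagged grid:
    odd rows are shifted right by half a tile; point (0,0) is the
    bottom-left corner of cell (0,0); boundary points go to the
    higher-index cell."""
    row = math.floor(y / tile)
    if row % 2 == 0:
        col = math.floor(x / tile)            # unshifted row
    else:
        col = math.floor(x / tile - 0.5)      # row shifted right by half a tile
    return col, row

def cell_to_point(col, row, tile=1.0):
    """Bottom-left corner of a cell, inverting the half-tile shift."""
    x = col * tile + (row % 2) * tile / 2.0
    return x, row * tile
```

In 3D the same shift applies along x with the ((y+z) mod 2) test from the question, and the lookup-table idea above would simply precompute cell_to_point for the whole map.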
Perhaps you can avoid all the complex math if you look at your problem in another way:
I see that you only shift your blocks (alternating) along the first axis, by half the block size. If you split your blocks in half along this axis, the above example becomes (with the shifts) a simple (9x4x4) cartesian coordinate system with regularly stacked blocks. Now doing the raytracing becomes much simpler and less error-prone.
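Once the grid is regular, the "simple raytracing" is the classic regular-grid traversal (in the style of Amanatides & Woo). A 2D Python sketch assuming unit-sized cells; it generalizes to 3D by adding a z axis with the same t_max/t_delta bookkeeping:

```python
import math

def grid_line(ax, ay, bx, by):
    """Return the cells crossed by the segment (ax,ay)-(bx,by) in a regular
    unit grid, stepping one cell boundary at a time."""
    x, y = math.floor(ax), math.floor(ay)
    end_x, end_y = math.floor(bx), math.floor(by)
    dx, dy = bx - ax, by - ay
    step_x = 1 if dx > 0 else -1
    step_y = 1 if dy > 0 else -1
    # parametric distance along the segment to the first vertical/horizontal
    # grid line, and the distance between successive grid lines
    t_max_x = (((x + (step_x > 0)) - ax) / dx) if dx else math.inf
    t_max_y = (((y + (step_y > 0)) - ay) / dy) if dy else math.inf
    t_dx = abs(1 / dx) if dx else math.inf
    t_dy = abs(1 / dy) if dy else math.inf
    cells = [(x, y)]
    while (x, y) != (end_x, end_y):
        if t_max_x < t_max_y:   # next boundary crossed is vertical
            x += step_x
            t_max_x += t_dx
        else:                   # next boundary crossed is horizontal
            y += step_y
            t_max_y += t_dy
        cells.append((x, y))
    return cells
```

After splitting the blocks as suggested, running this on the 9-wide grid and merging each pair of half-blocks back together gives the original tiles the line crosses.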