How to increase the coordinate resolution of a d3-geo chart - d3.js

I have a GeoJSON file with small details and features that I want to render using D3. Unfortunately, important details are lost because D3
removes polygon coordinate pairs that are closely spaced.
I've set up a small example to show this. The two links below use the exact same GeoJSON data, rendered with d3-geo and with Mapbox via GitHub's gist preview.
Specifically, notice the two areas marked by the red circles.
https://bl.ocks.org/alvra/eebb06be793bc06ff3ae01e6945298b6
https://gist.github.com/alvra/eebb06be793bc06ff3ae01e6945298b6
The top circle marks part of a polygon that is rounded using many closely spaced coordinate pairs, but D3 removes most of those points and just draws a rough square end.
The lower red circle marks a tiny triangle that is removed altogether. The adjacent polygons should touch exactly, but are also affected by D3's loss of precision.
I haven't found any documentation about D3's coordinate precision or a (configurable) feature size limit.
I've tried decreasing d3-geo's EPSILON and the related EPSILON2 values, and that fixes the problem for my data, although I'm sure even smaller features would still be affected.
I assume this is related to the fact that D3 uses proper geodesics for polygon segments, while other mapping libraries just draw straight lines (in the output coordinate space), but I would expect that process to only introduce new points, not remove existing ones.
I haven't been able to find other users reporting similar problems with small features, which surprises me; I would have expected this to come up before.
Does anyone have an idea about the proper way to deal with this?
By experimenting with epsilon, I've narrowed the problem down to this use of pointEqual(). This indicates that clipCircle considers closely spaced coordinates equal and removes them.
Indeed, if I disable circular clipping with projection.clipAngle(null), the problem disappears.
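For anyone hitting the same issue, here is a minimal sketch of that workaround. It assumes an azimuthal projection (d3.geoOrthographic here, purely for illustration) and the d3-geo v1+ API; note that without circular clipping, geometry on the far side of the globe will also be drawn.

```javascript
// Minimal sketch: disable the small-circle clipping that merges closely
// spaced points. Projection choice and numbers are illustrative only.
const projection = d3.geoOrthographic()
    .scale(3000)
    .translate([480, 250])
    .clipAngle(null);                // no circular clipping, so tiny features survive

const path = d3.geoPath(projection);

svg.selectAll("path")
    .data(geojson.features)          // `geojson` is the loaded FeatureCollection
  .enter().append("path")
    .attr("d", path);
```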

Related

How to use a D3 packing layout or force layout across a horizontal axis? [duplicate]

I have a data set where each sample has a size (0-1000) and a value (grade 1-5). I want to visualise the data with circles of different sizes along a line (domain axis), much like:
http://www.nytimes.com/interactive/2013/05/25/sunday-review/corporate-taxes.html?_r=1&
(note that circles, even ones with the same effective tax rate, do not overlap)
Example data:
sample 1: size 300 value 3.2
sample 2: size 45 value 3.8
sample 3: size 4400 value 4.0
sample 5: size 233 value 0.2
sample 6: size 4000 value 4.2
How can the data above be visualised using circles on a line (size decides diameter, value decides approximate position on the line) so that circles do not overlap?
I've been looking at D3's packing layout, but from what I can tell it doesn't support this out of the box. Anyone got any ideas on how to approach this?
Oooh, this one was a puzzle...
If you look at the code for the NYTimes graphic, it uses pre-computed coordinates in the data file, so that's not much use.
However, there's an unused variable declaration at the top of the script that hints that the original version used d3.geom.quadtree to lay out the circles. The quadtree isn't actually a layout method; it is used to create a search tree of adjacent nodes, so that when you need to find a node in a given area you don't have to search through the whole set. Example here.
The quadtree can therefore be used to identify which of your datapoints might be overlapping each other on the x-axis. Then you have to figure out how much you need to offset them in order to avoid that overlap. The variable radii complicate both functions...
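To make that concrete, here is a rough sketch of an overlap check backed by a quadtree. It uses the modern d3.quadtree API rather than the d3.geom.quadtree mentioned above, and the names (placedCircles, maxRadius) are illustrative:

```javascript
// Build a quadtree over the circles that have already been placed.
const quadtree = d3.quadtree()
    .x(d => d.x)
    .y(d => d.y)
    .addAll(placedCircles);              // placedCircles: [{x, y, r}, ...]

// Find a placed circle that overlaps the candidate circle c = {x, y, r}.
// maxRadius is the largest radius in the data, used to prune subtrees.
function findOverlap(c, maxRadius) {
  let hit = null;
  quadtree.visit((node, x0, y0, x1, y1) => {
    if (!node.length) {                  // leaf: check the actual circle(s)
      for (let leaf = node; leaf; leaf = leaf.next) {
        const d = leaf.data;
        if (Math.hypot(d.x - c.x, d.y - c.y) < d.r + c.r) hit = d;
      }
    }
    // Returning true skips subtrees too far away to contain an overlap.
    const pad = c.r + maxRadius;
    return x0 > c.x + pad || x1 < c.x - pad || y0 > c.y + pad || y1 < c.y - pad;
  });
  return hit;
}
```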
I've got a test case implemented here:
http://fiddle.jshell.net/6cW9u/5/
The packing algorithm isn't perfect: I always add new circles to the outside of existing circles, without testing whether they could possibly fit closer in, so sometimes you get significant extra whitespace when it is just the far edges of circles bumping into each other. (Run it a few times to get an idea of the possibilities -- note that I've got x-values distributed as random normal and r-values distributed as random uniform.) I also got a stack overflow on the recursive methods during one iteration with N=100 -- the random data clearly wasn't spread out well enough for the quadtree optimization.
But it's got the basic functionality. Leave a comment here if you can't follow the logic of my code comments.
--ABR
Update
New fiddle here: http://fiddle.jshell.net/6cW9u/8/
After a lot of re-arranging, I got the packing algorithm to search for gaps between existing bubbles. I've got the sort order switched (so that biggest circles get added first) to show off how little circles can get added in the gaps -- although as I mention in the code comments, this reduces the efficiency of the quadtree search.
Also added various decoration and transition so you can clearly see how the circles are being positioned, and set the r-scale to be square root, so the area (not radius) is proportional to the value in the data (which is more realistic, and what the O.P. asked for).
D3's packing layout is not the answer here. It places circles in a spiral fashion around the existing group. Here's me reverse-engineering the algorithm behind packing layout:
I would suggest a force layout-based approach. That way, you can give your nodes force towards a gravitational center, and then let gravity do its thing.
Force layouts (e.g. Clustered Force Layout I) are usually animations, so you'll want to apply a static force layout.
I've wrapped up this approach in an example block, which looks like this:
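In case the block is unavailable, here is a minimal sketch of that static approach using the d3-force module (d3 v4+). The scales (xScale, rScale), the svg selection, height, and the iteration count are assumed to exist elsewhere and are not taken from the linked block:

```javascript
// nodes: one object per sample; rScale maps size -> radius, xScale maps value -> x.
const nodes = data.map(d => ({r: rScale(d.size), value: d.value}));

const simulation = d3.forceSimulation(nodes)
    .force("x", d3.forceX(d => xScale(d.value)).strength(1)) // pull toward value position
    .force("y", d3.forceY(height / 2))                       // keep circles near the axis
    .force("collide", d3.forceCollide(d => d.r + 1))         // prevent overlap
    .stop();

// Run the simulation synchronously instead of animating it.
for (let i = 0; i < 200; ++i) simulation.tick();

svg.selectAll("circle")
    .data(nodes)
  .enter().append("circle")
    .attr("cx", d => d.x)
    .attr("cy", d => d.y)
    .attr("r", d => d.r);
```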

Is it possible to import a Collada model that aligns to pixels?

Assume I have a model that is simply a cube. (It is more complicated than a cube, but for the purposes of this discussion, we will simplify.)
So when I am in Sketchup, the cube is Xmm by Xmm by Xmm, where X is an integer. I then export a Collada file and subsequently load that into threejs.
Now if I look at the geometry bounding box, the values are floats, not integers.
So now assume I am putting cubes next to each other with a small space in between, say 1 pixel. Because screens can't draw half pixels, sometimes I see one pixel and sometimes I see two, which causes a lack of uniformity.
I think I can resolve this satisfactorily if I can somehow get the imported model to have integer dimensions. I have full access to all parts of the model starting with Sketchup, so any point in the process is fair game.
Is it possible?
Thanks.
Clarification: My app will have two views. The view that this is concerned with is using an OrthographicCamera that is looking straight down on the pieces, so this is really a 2D view. For purposes of this question, after importing the model, it should look like a grid of squares with uniform spacing in between.
UPDATE: I would ask that you please not respond unless you can provide an actual answer. If I need help finding a way to accomplish something, I will post a new question. For this question, I am only interested in knowing if it is possible to align an imported Collada model to full pixels and if so how. At this point, this is mostly to serve my curiosity and increase my knowledge of what is and isn't possible. Thank you community for your kind help.
Now you have to learn this thing about 3D programming: numbers don't mean anything :)
In the real world 1mm, 2.13cm and 100Kg specify something that can be measured and reproduced. But for a drawing library, those numbers don't mean anything.
In a drawing library, 3D points are always represented with 3 float values. You submit your points to the library, it transforms them into 2D points (they must be viewed on a 2D surface), and finally these 2D points are passed to a rasterizer, which translates the floating point values into integer values (the screen has a resolution of NxM pixels, both N and M being integers) and colors the actual pixels.
Your problem simply is not a problem. A cube of 1mm really means nothing on its own: in an astronomical application that object would never be visible, while in a microscopic one it could be far larger than the screen. What matters are the coordinates of the points and the scale of the overall application.
Now back to your cubes, don't try to insert 1px in between two adjacent ones. Your cubes are defined in terms of mm, so try to choose the distance in mm appropriate to your world, and let the rasterizer do its job and translate them to pixels.
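If you do want model units to line up with pixels, the practical knob is the one described above: the overall scale. With an OrthographicCamera whose frustum matches the canvas size, one model unit corresponds to one pixel, so integer spacings in the model stay integer on screen (as long as the positions themselves are whole numbers). A rough three.js sketch; the variable names are illustrative:

```javascript
// Size the orthographic frustum to the canvas so 1 world unit == 1 pixel.
// A 10-unit gap between cubes then renders as exactly 10 pixels (before any zoom).
const camera = new THREE.OrthographicCamera(
    -width / 2, width / 2,     // left, right
    height / 2, -height / 2,   // top, bottom
    0.1, 1000);                // near, far
camera.position.set(0, 100, 0);  // looking straight down, as in the question
camera.up.set(0, 0, -1);         // up vector must not be parallel to the view direction
camera.lookAt(0, 0, 0);
```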
Two co-workers I tracked down have informed me that this is indeed impossible by normal means.

Recognizing distortions in a regular grid

To give you some background as to what I'm doing: I'm trying to quantitatively record variations in flow of a compressible fluid via image analysis. One way to do this is to exploit the fact that the index of refraction of the fluid is directly related to its density. If you set up some kind of image behind the flow, the distortion in the image due to refractive index changes throughout the fluid field leads you to a density gradient, which helps to characterize the flow pattern.
I have a set of routines that do this successfully with a regular 2D pattern of dots. The dot pattern is slightly distorted, and by comparing the position of the dots in the distorted image with that in the non-distorted image, I get a displacement field, which is exactly what I need. The problem with this method is resolution. The resolution is limited to the number of dots in the field, and I'm exploring methods that give me more data.
One idea I've had is to use a regular grid of horizontal and vertical lines. This image will distort the same way, but instead of getting only the displacement of a dot, I'll have the continuous distortion of a grid. It seems like there must be some standard algorithm or procedure to compare one geometric grid to another and infer some kind of displacement field. Nonetheless, I haven't found anything like this in my research.
Does anyone have some ideas that might point me in the right direction? FYI, I am not a computer scientist -- I'm an engineer. I say that only because there may be some obvious approach I'm neglecting due to coming from a different field. But I can program. I'm using MATLAB, but I can read Python, C/C++, etc.
Here are examples of the type of images I'm working with: a regular reference grid, and the same grid distorted by the flow.
I think you are looking for the Digital Image Correlation algorithm.
Here you can see a demo.
Here is a Matlab Implementation.
From Wikipedia:
Digital Image Correlation and Tracking (DIC/DDIT) is an optical method that employs tracking & image registration techniques for accurate 2D and 3D measurements of changes in images. This is often used to measure deformation (engineering), displacement, and strain, but it is widely applied in many areas of science and engineering.
Edit
Here I applied the DIC algorithm to your distorted image using Mathematica, showing the relative displacements.
Edit
You may also easily identify the maximum displacement zone:
Edit
After some work (quite a bit, frankly) you can come up to something like this, representing the "displacement field", showing clearly that you are dealing with a vortex:
(Darker and bigger arrows mean more displacement (velocity).)
Post me a comment if you are interested in the Mathematica code for this one. I think my code is not going to help anybody else, so I omit posting it.
I would also suggest a line tracking algorithm would work well.
Start at the first pixel row of the image and follow each of the vertical lines downwards (you only need the first row to find the starting points). Following a line can be done with a simple pattern that repeatedly steps orthogonally to the line's gradient. When you reach a crossing with a horizontal line, measure that point (in x,y coordinates) and compare it to the corresponding crossing point in your distorted image.
Since your grid is regular, you know that the n-th measured crossing point on the m-th vertical black line corresponds between the two images. Then you simply compare the two points by computing their distance. Do this for each line on your grid and you will get how far each crossing point of the grid is displaced.
This line-following approach is also used in basic edge-linking algorithms and in the Canny edge detector.
(These are just theoretical ideas and I cannot provide you with a ready-made algorithm, but I think it should work well on distorted images like yours, and maybe it is helpful for you.)
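A small sketch of that crossing-point comparison (JavaScript for illustration; the question mentions MATLAB, but the idea is language-agnostic). It assumes the crossings have already been extracted and matched by their grid indices:

```javascript
// crossings[m][n] = {x, y}: the n-th crossing on the m-th vertical line.
// Returns one displacement vector per crossing point.
function displacementField(referenceCrossings, distortedCrossings) {
  return referenceCrossings.map((line, m) =>
    line.map((p, n) => {
      const q = distortedCrossings[m][n];
      return {
        x: p.x, y: p.y,                      // where the point was
        dx: q.x - p.x, dy: q.y - p.y,        // how far it moved
        magnitude: Math.hypot(q.x - p.x, q.y - p.y)
      };
    }));
}
```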

Best approach for specific Object/Image Recognition task?

I'm searching for a certain object in my photograph:
Object: Outline of a rectangle with an X in the middle. It looks like a rectangular checkbox. That's all. So, no fill, just lines. The rectangle will have the same ratios of length to width but it could be any size or any rotation in the photograph.
I've looked at a whole bunch of image recognition approaches, but I'm trying to determine the best one for this specific task. Most importantly, the object is made of lines and is not a filled shape. Also, there is no perspective distortion, so the rectangular object will always have right angles in the photograph.
Any ideas? I'm hoping for something that I can implement fairly easily.
Thanks all.
You could try using a corner detector (e.g. Harris) to find the corners of the box, the ends and the intersection of the X. That simplifies the problem to finding points in the right configuration.
Edit (response to comment):
I'm assuming you can find the corner points in your image, the 4 corners of the rectangle, the 4 line endings of the X and the center of the X, plus a few other corners in the image due to noise or objects in the background. That simplifies the problem to finding a set of 9 points in the right configuration, out of a given set of points.
My first try would be to look at each corner point A. Then I'd iterate over the points B close to A. Now if I assume that (e.g.) A is the upper left corner of the rectangle and B is the lower right corner, I can easily calculate, where I would expect the other corner points to be in the image. I'd use some nearest-neighbor search (or a library like FLANN) to see if there are corners where I'd expect them. If I can find a set of points that matches these expected positions, I know where the symbol would be, if it is present in the image.
You have to try whether that is good enough for your application. If you have too many false positives (sets of corners of other objects that accidentally form a rectangle + X), you could check if there are lines (i.e. high contrast in the right direction) where you would expect them. And you could check if there is low contrast where there are no lines in the pattern. This should be relatively straightforward once you know the points in the image that correspond to the corners/line endings in the object you're looking for.
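Here is a rough sketch of that configuration check. The corner detector is assumed to run elsewhere, A and B are hypothesised to be opposite corners of a rectangle with a known width:height ratio, and all names are illustrative:

```javascript
// Given opposite corners A and B and the aspect ratio w/h, predict where the
// other two corners and the centre of the X should be. All four corners of a
// rectangle lie on the circle whose diameter is the diagonal AB, so the other
// diagonal is just half of AB rotated about the midpoint.
function predictPoints(A, B, aspect) {
  const mx = (A.x + B.x) / 2, my = (A.y + B.y) / 2;    // centre of rectangle and X
  const vx = (B.x - A.x) / 2, vy = (B.y - A.y) / 2;    // half of the diagonal AB
  const phi = -2 * Math.atan(1 / aspect);              // angle between the diagonals
  const ux = vx * Math.cos(phi) - vy * Math.sin(phi);  // half of the other diagonal
  const uy = vx * Math.sin(phi) + vy * Math.cos(phi);
  // The mirrored rectangle uses +phi; test both if the orientation is unknown.
  return [
    {x: mx + ux, y: my + uy},   // third corner
    {x: mx - ux, y: my - uy},   // fourth corner
    {x: mx, y: my},             // intersection of the X
  ];
}

// Accept the hypothesis if every predicted point has a detected corner nearby.
function matchesConfiguration(predicted, corners, tol) {
  return predicted.every(p =>
    corners.some(c => Math.hypot(c.x - p.x, c.y - p.y) <= tol));
}
```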
I'd suggest the Generalized Hough Transform. It seems you have a fairly simple, fixed shape. The generalized Hough transform should be able to detect that shape at any rotation or scale in the image. You may need to threshold the original image, or pre-process it in some way, for this method to be useful though.
You can use local features to identify the object in the image. Feature detection wiki
For example, you can calculate features on some reference image which contains only the object you're looking for and save the results, let's say, to a plain text file. After that you can search for the object just by comparing newly calculated features (on images of complex scenes containing the object) with the reference ones.
Here's some good resource on local features:
Local Invariant Feature Detectors: A Survey

Raytracing (LoS) on 3D hex-like tile maps

Greetings,
I'm working on a game project that uses a 3D variant of hexagonal tile maps. Tiles are actually cubes, not hexes, but are laid out just like hexes (because a square can be turned into a cube to extrapolate from 2D to 3D, but there is no 3D analogue of a hex). Rather than a verbose description, here goes an example of a 4x4x4 map:
(I have highlighted an arbitrary tile (green) and its adjacent tiles (yellow) to help describe how the whole thing is supposed to work; but the adjacency functions are not the issue, that's already solved.)
I have a struct type to represent tiles, and maps are represented as a 3D array of tiles (wrapped in a Map class to add some utility methods, but that's not very relevant).
Each tile is supposed to represent a perfectly cubic space, and they are all exactly the same size. Also, the offset between adjacent "rows" is exactly half the size of a tile.
That's enough context; my question is:
Given the coordinates of two points A and B, how can I generate a list of the tiles (or, rather, their coordinates) that a straight line between A and B would cross?
That would later be used for a variety of purposes, such as determining Line-of-sight, charge path legality, and so on.
BTW, this may be useful: my maps use (0,0,0) as a reference position. The 'jagging' of the map can be defined as offsetting each tile by ((y+z) mod 2) * tileSize/2.0 to the right of the position it would have in a regular cartesian system. For non-jagged rows that yields 0; for rows where (y+z) mod 2 is 1, it yields half a tile.
I'm working in C# 4 targeting the .NET Framework 4.0, but I don't really need specific code, just the algorithm to solve this weird geometric/mathematical problem. I have been trying for several days to solve it, to no avail; trying to draw the whole thing on paper to "visualize" it didn't help either :(.
Thanks in advance for any answer
Until one of the clever SOers turns up, here's my dumb solution. I'll explain it in 2D 'cos that makes it easier to explain, but it will generalise to 3D easily enough. I think any attempt to work this entirely in cell index space is doomed to failure (though I'll admit that's just what I think, and I look forward to being proved wrong).
So you need to define a function to map from cartesian coordinates to cell indices. This is straightforward, if a little fiddly. First, decide whether point(0,0) is the bottom-left corner of cell(0,0), the centre, or some other point. Since it makes the explanations easier, I'll go with the bottom-left corner. Observe that any point(x,y) with floor(y) == 0 maps to cell(floor(x),0). Indeed, any point(x,y) with floor(y) even maps to cell(floor(x),floor(y)).
Here, I invent the boolean function even, which returns True if its argument is an even integer. I'll use odd next: any point(x,y) with floor(y) odd maps to cell(floor(x-0.5),floor(y)).
Now you have the basics of the recipe for determining lines-of-sight.
You will also need a function to map from cell(m,n) back to a point in cartesian space. That should be straightforward once you have decided where the origin lies.
Now, unless I've misplaced some brackets, I think you are on your way. You'll need to:
decide where in cell(0,0) you position point(0,0); and adjust the function accordingly;
decide where points along the cell boundaries fall; and
generalise this into 3 dimensions.
Depending on the size of the playing field you could store the cartesian coordinates of the cell boundaries in a lookup table (or other data structure), which would probably speed things up.
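A short sketch of that 2D mapping (JavaScript for illustration; the question targets C#). It assumes point(0,0) is the bottom-left corner of cell(0,0), a cell size of 1, and odd rows shifted right by half a cell, as in the recipe above:

```javascript
// Cartesian point -> cell indices, for the 2D case described above.
function pointToCell(x, y) {
  const row = Math.floor(y);
  const col = (row % 2 === 0) ? Math.floor(x)          // even rows: no shift
                              : Math.floor(x - 0.5);   // odd rows: shifted half a cell
  return {col, row};
}

// Inverse: bottom-left corner of a cell in cartesian coordinates.
function cellToPoint(col, row) {
  const shift = (row % 2 === 0) ? 0 : 0.5;
  return {x: col + shift, y: row};
}
```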
Perhaps you can avoid all the complex math if you look at your problem in another way:
I see that you only shift your blocks (alternating) along the first axis by half the block size. If you split up your blocks along this axis, the above example becomes (with the shifts) a simple 9x4x4 cartesian coordinate system with regularly stacked blocks. Doing the raytracing then becomes much simpler and less error-prone.
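Once the blocks are split into a regular unit grid like that, listing the cells a segment passes through is a standard grid traversal (in the spirit of Amanatides & Woo). A rough sketch in JavaScript rather than the question's C#; it assumes coordinates are already expressed in the split grid's cell units:

```javascript
// Walk the cells of a unit grid crossed by the segment from a to b.
// Returns a list of {x, y, z} cell indices, in visiting order.
function cellsOnSegment(a, b) {
  let x = Math.floor(a.x), y = Math.floor(a.y), z = Math.floor(a.z);
  const endX = Math.floor(b.x), endY = Math.floor(b.y), endZ = Math.floor(b.z);
  const dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
  const stepX = Math.sign(dx), stepY = Math.sign(dy), stepZ = Math.sign(dz);

  // Parametric distance along the segment to the next grid plane on each axis.
  const next = (p, d, i, step) =>
      d === 0 ? Infinity : ((i + (step > 0 ? 1 : 0)) - p) / d;
  let tMaxX = next(a.x, dx, x, stepX);
  let tMaxY = next(a.y, dy, y, stepY);
  let tMaxZ = next(a.z, dz, z, stepZ);
  const tDeltaX = dx === 0 ? Infinity : Math.abs(1 / dx);
  const tDeltaY = dy === 0 ? Infinity : Math.abs(1 / dy);
  const tDeltaZ = dz === 0 ? Infinity : Math.abs(1 / dz);

  const cells = [{x, y, z}];
  while (x !== endX || y !== endY || z !== endZ) {
    // Step along the axis whose boundary is crossed first.
    if (tMaxX <= tMaxY && tMaxX <= tMaxZ) { x += stepX; tMaxX += tDeltaX; }
    else if (tMaxY <= tMaxZ)              { y += stepY; tMaxY += tDeltaY; }
    else                                  { z += stepZ; tMaxZ += tDeltaZ; }
    cells.push({x, y, z});
  }
  // For robustness against floating-point edge cases, a production version
  // should also stop once the parametric distance exceeds 1.
  return cells;
}
```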
