ClojureScript NVD3 full-height shaded intervals - d3.js

My graphs currently indicate "no data" by calculating the holes in my datasets, then generating a new fake dataset that ranges from nil to max(all-y-values), making it look like a full-height background. I make it an "area" dataset and apply an SVG pattern so it reads as a shaded, full-height interval.
The problem arises when the y-axis scale extends beyond the data, in other words, when max(all-y-values) doesn't coincide with the top of the graph.
I need to make these shaded background intervals always the full height of the graph, and I'm willing to rethink the whole process of adding them.
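For what it's worth, the gap-finding step can be summarized like this. This is a Python sketch of the idea only (the actual implementation is ClojureScript/Om), and the names step and y_top are my assumptions; the key point is to span each interval to the chart's y-domain maximum rather than max(all-y-values):

```python
# Python sketch of the gap-finding idea; the real code is ClojureScript.
# `step` (expected x spacing) and `y_top` (the chart's y-domain maximum)
# are assumed names, not from the question.
def no_data_intervals(points, step, y_top):
    """Return fake 'area' series covering holes in a sorted (x, y) series,
    spanning up to y_top (the top of the chart, not the max of the data)."""
    intervals = []
    for (x0, _), (x1, _) in zip(points, points[1:]):
        if x1 - x0 > step:  # a hole in the data
            intervals.append([(x0, y_top), (x1, y_top)])
    return intervals

series = [(0, 3), (1, 5), (4, 2), (5, 4)]
print(no_data_intervals(series, step=1, y_top=10))  # [[(1, 10), (4, 10)]]
```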
My recent attempts have tried to follow the "filling an area above the line" example described in D3 Tips and Tricks, as well as the NVD3 documentation and various other resources, but I haven't gotten anything working.
Any solution must either use ClojureScript and Om or at least be directly compatible with them, since I already have a proof of concept in the above links and I am looking for more.

Related

How to add multiple geojsons to a geochoropleth in dc.js?

I'm trying to create a geochoropleth that maps subregions, but also includes outlines of larger regions. (You can think of it like mapping counties, but then wanting to include thicker outlines of states). Not all subregions are part of larger regions that need to be outlined. (Most aren't.) You can see an example of what I'm trying to replicate here:
What's the best way to add this regional outline to my map? I've tried keeping the regions and subregions as two separate files, with two overlayGeoJson calls in my geochoropleth chart (plus some d3 styling to change the fill and stroke so the regions render as outlines only). But when I do, the projection of the regional outline layer is strangely offset from the lower one.
I've also considered having both sets of boundaries in just the one geojson. However, I wasn't sure how to work with this.
While it would be nice to be able to mouseover the boundaries of the larger regions and get a tooltip before crossing over into the individual subregions and getting their tooltips, this isn't a must. I could live with just outlines around the regions. Please advise on the best way to do this. Happy to provide more detail, and thanks so much!
EDIT: I discovered that I had a misplaced transform attribute, which is what offset the second layer. Fixed now!

How to increase the coordinate resolution of a d3-geo chart

I have a GeoJSON file with small details and features that I want to render using D3. Unfortunately, important details are lost because D3 removes polygon coordinate pairs that are closely spaced.
I've set up a small example to show this. Both links below use the exact same GeoJSON data, rendered with d3-geo and with Mapbox via GitHub.
Specifically, notice the two areas marked by the red circles.
https://bl.ocks.org/alvra/eebb06be793bc06ff3ae01e6945298b6
https://gist.github.com/alvra/eebb06be793bc06ff3ae01e6945298b6
The top one marks a part of a polygon that is rounded using many closely spaced coordinate pairs, but D3 removes most of the points and just draws a rough square end.
The lower red circle marks a tiny triangle that is removed altogether. The adjacent polygons should touch exactly, but are also affected by D3's loss of precision.
I haven't found any documentation about D3's coordinate precision or a (configurable) feature size limit.
I've tried decreasing d3-geo's EPSILON and the related EPSILON2 value, and that fixes the problem (for me), although I'm sure even smaller features would still be affected.
Assuming this is related to the fact that D3 uses proper geodesics for polygon segments, while other mapping libraries just draw straight lines (in the output coordinate space), I was hoping that this process could only introduce new points, not remove any.
I haven't been able to find other users experiencing similar problems with small features, and I'm surprised this has never come up before.
Does anyone have an idea about the proper way to deal with this?
By tweaking epsilon, I've narrowed the problem down to this use of pointEqual(). This indicates the problem is clipCircle considering closely spaced coordinates equal and removing them.
Indeed, if I disable circular clipping with projection.clipAngle(null), the problem disappears.
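To illustrate the effect described above (this is a toy Python sketch of the mechanism, not d3's actual pointEqual/clipCircle code), an epsilon-based equality test erases features smaller than the epsilon:

```python
# Toy illustration: an epsilon point-equality test collapses closely
# spaced vertices, wiping out a tiny triangle during deduplication.
EPSILON = 1e-6

def point_equal(a, b, eps=EPSILON):
    return abs(a[0] - b[0]) < eps and abs(a[1] - b[1]) < eps

def dedupe(ring, eps=EPSILON):
    out = [ring[0]]
    for p in ring[1:]:
        if not point_equal(out[-1], p, eps):
            out.append(p)
    return out

tiny_triangle = [(0.0, 0.0), (2e-7, 0.0), (1e-7, 2e-7), (0.0, 0.0)]
print(dedupe(tiny_triangle))            # collapses to a single point
print(dedupe(tiny_triangle, eps=1e-9))  # survives with a smaller epsilon
```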

Algorithm for fitting text into an irregular shape

I'm sure this is already answered somewhere, but I just don't know the correct terminology to search for.
Context: I'm developing some code to generate a PDF using a fairly low-level library, so I'm having to write some basic text layout and fitting routines that break on word boundaries and fit the text within defined constraints (e.g., in a column or around a fixed block).
I'd like to find a reasonably efficient approach for fitting text around an arbitrary shape, e.g., something like this:
(This example was taken from this blog post: http://blog.amyworrall.com/post/11098565269/text-wrap-with-core-text, which was an answer to this question: Rendering CoreText within an irregular shape)
I'm guessing I need to break the text down into a series of boxes, and it then becomes a geometry problem of fitting boxes into the shape, but I'm struggling to find good explanations of suitable algorithms or approaches. Delving into browser engine layout code to see how they do it is a case of getting lost in the detail.
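As a starting point, here is a rough sketch of that box-fitting idea using Python and Shapely: scan a horizontal line per baseline, intersect it with the polygon to get the usable spans, then greedily fill each span with words. The fixed char_width and line_height are simplifying assumptions (monospaced text, axis-aligned line boxes):

```python
from shapely.geometry import Polygon, LineString

def layout(text, shape, line_height=12.0, char_width=6.0):
    """Greedy word wrap inside an arbitrary polygon (a sketch, not a
    production line breaker: no hyphenation, no kerning)."""
    words = text.split()
    lines, y = [], shape.bounds[3] - line_height  # start at the top
    while words and y > shape.bounds[1]:
        # available horizontal spans at this baseline
        scan = LineString([(shape.bounds[0] - 1, y), (shape.bounds[2] + 1, y)])
        spans = scan.intersection(shape)
        for seg in getattr(spans, "geoms", [spans]):
            if seg.is_empty:
                continue
            x0, x1 = seg.bounds[0], seg.bounds[2]
            line = []
            while words:
                candidate = " ".join(line + [words[0]])
                if x0 + len(candidate) * char_width > x1:
                    break
                line.append(words.pop(0))
            if line:
                lines.append((x0, y, " ".join(line)))
        y -= line_height
    return lines

shape = Polygon([(0, 0), (200, 0), (200, 150), (80, 150), (0, 60)])
for x, y, text in layout("the quick brown fox jumps over the lazy dog " * 3, shape):
    print(f"({x:6.1f},{y:6.1f}) {text}")
```

The same scan-line idea extends to wrapping around a hole: subtract the obstacle polygon from the column polygon before scanning, and the intersection naturally returns multiple spans per line.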

Convert polygons into mesh

I have a lot of polygons. Ideally, the polygons should not overlap one another, but they can sit adjacent to each other.
In practice, though, I have to allow for slight polygon overlap (defined by a certain tolerance), because all these polygons come from user hand-drawn input, which is not as machine-precise as I would like.
My question is: are there any software library components that:
Accept a range of polygons as input
Check whether the polygons overlap by more than a prespecified tolerance
If so, stop; otherwise, continue
Create a mesh (coordinates and elements) for the polygons by grouping common vertices and edges together
More importantly, link the mesh edges back to the original polygons' edges
Or has anyone tackled this issue before?
This issue is the daily bread of GIS applications; it is exactly what is done there. We covered it in a GIS course. Look at how GIS systems address this issue. E.g., ArcGIS defines so-called topology rules and has functions to check whether edited features are topologically correct. See http://webhelp.esri.com/arcgisdesktop/9.2/index.cfm?TopicName=Topology_rules
This is pretty long, only because the question is so big. I've tried to group my comments based on your bullet points.
Components to draw polygons
My guess is that you'll have limited success without providing more information; a component to draw polygons will be very much coupled to the language and UI paradigm you are using for the rest of your project, i.e., code for a web component will look very different from a native component.
Perhaps an alternative is to separate this element of the process out from the rest of what you're trying to do. There are some absolutely fantastic pre-existing editors that you can use to create 2d and 3d polygons.
Inkscape is an example of a vector graphics editor that makes it easy to enter 2d polygons, and has the advantage of producing SVG output, which is reasonably easy to parse.
In three dimensions Blender is an open source editor that can be used to produce arbitrary geometries that can be exported to a number of formats.
If you can use a Google Maps API (possibly in a native HTML rendering control), and you are interested in adding spatial points on a map overlay, you may be interested in a related click-to-draw polygon question on Stack Overflow. From past experience, other map APIs like OpenLayers support similar approaches.
Check whether polygons are overlapped
Thomas T made the point in his answer that there are families of related predicates that can be used to address this and related queries. If you are literally just looking for overlaps and other set-theoretic operations (union, intersection, set difference) in two dimensions, you can use the General Polygon Clipper.
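For example, with Shapely (a stand-in for GPC here; the tolerance value is something you would tune to your drawing precision), the tolerance-based overlap check can be a one-liner:

```python
from shapely.geometry import Polygon

def overlaps_too_much(a, b, tolerance=1e-3):
    """True if the overlap area of polygons a and b exceeds tolerance."""
    return a.intersection(b).area > tolerance

a = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
b = Polygon([(0.999, 0), (2, 0), (2, 1), (0.999, 1)])  # hand-drawn slop
print(overlaps_too_much(a, b))  # False: the sliver overlap is within tolerance
```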
You may also need to consider the slightly more general problem of two polygons that don't quite overlap or share a vertex when they should. You can use a Minkowski sum to dilate (enlarge) two- and three-dimensional polygons to avoid such problems. The Computational Geometry Algorithms Library (CGAL) has robust implementations of these algorithms.
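A sketch of the dilation idea, using Shapely's buffer as a practical stand-in for a true Minkowski sum with a disc (CGAL has the exact polygon versions):

```python
from shapely.geometry import Polygon

tol = 0.01
a = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
b = Polygon([(1.005, 0), (2, 0), (2, 1), (1.005, 1)])  # should touch a, but doesn't

print(a.touches(b), a.intersects(b))            # False False: a sliver gap
print(a.buffer(tol).intersects(b.buffer(tol)))  # True: dilation closes the gap
```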
I think it's more likely that you are really looking for a piece of software that can perform vertex welding. Christer Ericson's book Real-Time Collision Detection includes an extensive and very readable description of the basics in this field, and also of related issues such as edge snapping, crack detection, and T-junctions. However, even though code snippets are included in that book, I know of no ready-made library that addresses these problems; in particular, no complete implementation is given for anything beyond basic vertex welding.
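For reference, basic vertex welding can be sketched in a few lines. This is my own minimal illustration, not code from the book; note that snapping to a tolerance grid can misbehave for points straddling a cell boundary, which the book's treatment handles more carefully:

```python
def weld(polygons, tolerance=1e-3):
    """Snap coordinates onto a tolerance grid so nearby vertices collapse
    to one shared, indexed vertex."""
    index, vertices, elements = {}, [], []
    for poly in polygons:
        element = []
        for x, y in poly:
            key = (round(x / tolerance), round(y / tolerance))
            if key not in index:
                index[key] = len(vertices)
                vertices.append((x, y))
            element.append(index[key])
        elements.append(element)
    return vertices, elements

polys = [[(0, 0), (1, 0), (1, 1)], [(1.0002, 0), (2, 0), (1, 1.0004)]]
vertices, elements = weld(polys)
print(vertices)  # shared corners appear once: [(0,0), (1,0), (1,1), (2,0)]
print(elements)  # each polygon as indices into the vertex list
```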
Obviously, the major 3D packages (Blender, Maya, 3ds Max, Rhino) all include built-in tools to solve this problem.
Group polygons based on vertices
From past experience, this turned out to be one of the most time-consuming parts of developing software to solve problems in this area. It requires a reasonable understanding of graph theory and algorithms to traverse boundaries. It is worth relying on a solid geometry or graph library to do the heavy lifting for you. In the past I've had success with igraph.
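As an illustration of the kind of bookkeeping involved, here is a small python-igraph sketch that groups polygons into connected patches via shared welded vertex indices; it assumes the vertices/elements layout from the welding sketch above:

```python
import itertools
import igraph as ig

def group_polygons(elements):
    """One graph node per polygon; an edge whenever two polygons share a
    welded vertex index. Connected components are the patches."""
    owners, edges = {}, set()
    for poly_id, element in enumerate(elements):
        for v in element:
            owners.setdefault(v, []).append(poly_id)
    for polys in owners.values():
        edges.update(itertools.combinations(sorted(set(polys)), 2))
    g = ig.Graph(n=len(elements), edges=list(edges))
    return g.clusters().membership

elements = [[0, 1, 2], [1, 3, 2], [4, 5, 6]]  # first two share vertices 1 and 2
print(group_polygons(elements))  # [0, 0, 1]
```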
Link the updated polygons back to the originals.
Again, from past experience, this is just a case of careful bookkeeping, and some very careful design of your mesh classes up-front. I'd like to give more advice, but even after spending a big chunk of the last six months on this, I'm still struggling to find a "nice" way to do this.
Other Comments
If you're interacting with users, I would strongly recommend avoiding this issue where possible by using an editor that "snaps", rounding all user-entered points onto a grid. This should significantly reduce the amount of work you have to do.
Yes, you can use OGR, which has Python bindings. Specifically, the Geometry class has an Intersects method. I don't fully understand what you want in points 4 and 5.
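For example (the polygon coordinates here are placeholders):

```python
from osgeo import ogr

a = ogr.CreateGeometryFromWkt("POLYGON ((0 0, 1 0, 1 1, 0 1, 0 0))")
b = ogr.CreateGeometryFromWkt("POLYGON ((0.9 0, 2 0, 2 1, 0.9 1, 0.9 0))")

if a.Intersects(b):
    overlap = a.Intersection(b)  # the shared region
    print(overlap.GetArea())     # 0.1 here; compare against your tolerance
```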

Object detection + segmentation

I'm trying to find an efficient way, of acceptable complexity, to
detect an object in an image so I can isolate it from its surroundings, and
segment that object into its sub-parts and label them, so I can then fetch them at will.
It's been three weeks since I entered the image-processing world, and I've read about so many algorithms (SIFT, snakes, more snakes, Fourier-related, etc.) and heuristics that I don't know where to start or which one is "best" for what I'm trying to achieve. Bearing in mind that the image dataset of interest is a pretty large one, I don't even know if I should use an algorithm implemented in OpenCV or implement one on my own.
To summarize:
Which methodology should I focus on? Why?
Should I use OpenCV for that kind of stuff or is there some other 'better' alternative?
Thank you in advance.
EDIT -- More info regarding the datasets
Each dataset consists of 80K images of products sharing the same:
concept, e.g. t-shirts, watches, shoes
size
orientation (for 90% of them)
background (for 95% of them)
All pictures in each dataset look almost identical apart from the product itself, apparently. To make things a little clearer, let's consider only the 'watch dataset':
All the pictures in the set look almost exactly like this:
(again, apart from the watch itself). I want to extract the strap and the dial. The thing is that there are lots of different watch styles, and therefore shapes. From what I've read so far, I think I need a template algorithm that allows bending and stretching, so as to be able to match straps and dials of different styles.
Instead of creating three distinct templates (upper part of the strap, lower part of the strap, dial), it would be reasonable to create only one and segment it into three parts. That way, I could be confident that each part was detected in the intended position relative to the others, e.g. the dial would not be detected below the lower part of the strap.
Of all the algorithms/methodologies I've encountered, active shape/appearance models seem the most promising. Unfortunately, I haven't managed to find a decent implementation, and I'm not confident enough that it's the best approach to go ahead and write one myself.
If anyone could point out what I should be really looking for (algorithm/heuristic/library/etc.), I would be more than grateful. If again you think my description was a bit vague, feel free to ask for a more detailed one.
From what you've said, here are a few things that pop up at first glance:
The simplest thing to do is binarize the image and run connected components using OpenCV or the cvBlob library. For simple images with a non-complex background, this usually yields the objects.
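A sketch of that pipeline using OpenCV's Python bindings (Otsu thresholding and the area cutoff are assumptions; any binarization will do):

```python
import cv2

img = cv2.imread("watch.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
count, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

# stats rows are [x, y, width, height, area]; label 0 is the background
for label in range(1, count):
    x, y, w, h, area = stats[label]
    if area > 500:  # ignore specks; the threshold is an assumption
        print(f"object {label}: bbox=({x},{y},{w},{h}) area={area}")
```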
However, looking at your sample image, texture-based segmentation techniques may work better: the watch dial, the straps, and the background vary widely in texture/roughness, and this could be an ideal way to separate them.
The roughness of a region can easily be found with the Eigen transform (explained a bit on SO; check the link to the research paper provided there), and the Mean Shift filter can then be applied to the output of the Eigen transform. This gives regions clearly separated according to texture. Both the pyramidal Mean Shift and finding eigenvalues by SVD are implemented in OpenCV, so unless you can out-optimize the built-in code, it's better (and easier) to use the inbuilt functions (where present) as far as speed and efficiency are concerned.
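The Mean Shift step, for example, is a one-liner in OpenCV; the spatial and color window radii (sp, sr) are assumptions you would tune per dataset:

```python
import cv2

img = cv2.imread("watch.jpg")  # hypothetical file; must be 8-bit, 3-channel
smoothed = cv2.pyrMeanShiftFiltering(img, sp=21, sr=30)
cv2.imwrite("watch_regions.png", smoothed)  # texture-flattened regions
```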
I think I would turn the problem around. Instead of hunting for the dial, I would use a set of robust features from the watch to 'stitch' the target image onto a template. The first watch has a set of white squares in the dial; the second watch has a number of white circles. Per type of watch, I would:
Segment out the squares or circles in the dial. Segmentation steps can be tricky, as they are usually both scale- and light-dependent
Estimate the centers or corners of the above found feature areas. These are the new feature points.
Use the Hungarian algorithm to match features between the template watch and the target watch (see the sketch after this list). Alternatively, one can take the surroundings of each feature point in the original image and match these using cross-correlation
Use matching features between the template and the target to estimate scaling, rotation and translation
Stitch the image
As the image is now in a known form, one can extract the regions simply via preset coordinates
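Here is a sketch of steps 3-5 using SciPy's Hungarian solver (linear_sum_assignment) and OpenCV's similarity-transform estimator; the feature points below are made up for illustration:

```python
import numpy as np
import cv2
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# Made-up feature points; in practice these come from the segmentation step
template_pts = np.array([[10, 10], [50, 10], [30, 60]], dtype=np.float32)
target_pts = np.array([[32, 65], [12, 14], [52, 12]], dtype=np.float32)

# Hungarian algorithm on the pairwise-distance cost matrix
cost = cdist(template_pts, target_pts)
rows, cols = linear_sum_assignment(cost)

# Estimate scale, rotation, and translation from the matched pairs
matrix, inliers = cv2.estimateAffinePartial2D(target_pts[cols], template_pts[rows])
print(matrix)  # 2x3 transform; feed it to cv2.warpAffine to "stitch" the target
```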
