I noticed this fantastic example of D3's cluster layout. Instead of using a bounding box, is it possible to force-cluster the nodes into a polygon? http://codepen.io/zslabs/pen/MKaRNJ is an example of the polygon shape, but I'm looking for the added benefit of collision detection as well as a performant way of mapping the data. Thanks so much!
Update
https://github.com/d3/d3-shape looks like an interesting library for creating these shapes, but I still haven't seen an example of plotting and spreading points within a defined polygon.
I was also looking for the same thing and I found this similar thread: Force chart d3.js inside a triangle.
The top-voted answer has a method to detect collisions inside a triangle and also proposes a generalized version that works with any polygon. You can see it in this demo.
One thing the answer doesn't mention, and that might be useful to you, is computing the center of each of your polygons so that each force layout is centered inside its polygon; I would suggest using polygonCentroid from d3-polygon for that.
var polygon = require('d3-polygon');
var polygon_data = [ [0,0], [10, 0], [10, 10], [0, 10]]; // a small box
var centroid = polygon.polygonCentroid(polygon_data); // [5, 5]
I would definitely love to see this polygon constraint as a feature in d3 but it may be too specific :/
Update:
Just a small correction: the proposed solution does take the polygon's centroid into account, contrary to what I said. My bad.
Update 2:
I've created a block with an implementation for the polygon collision detection: http://bl.ocks.org/pbellon/4b875d2ab7019c0029b636523b34e074.
It uses the collision detection mentioned in the SO answer I linked, and I used it to create a force similar to forceCollide in d3 v4.
It's not perfect; I had a lot of trouble tweaking the way nodes are repelled from the polygon's border... If anyone has suggestions I would be happy to hear them!
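For what it's worth, here is a minimal sketch of the kind of per-tick polygon constraint I mean. The point-in-polygon test is re-implemented so the snippet is self-contained (d3.polygonContains from d3-polygon gives the same result), and constrainToPolygon is a hypothetical helper, not code from my block:

```javascript
// Ray-casting point-in-polygon test (same result as d3.polygonContains).
function polygonContains(polygon, [x, y]) {
  let inside = false;
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const [xi, yi] = polygon[i], [xj, yj] = polygon[j];
    if ((yi > y) !== (yj > y) && x < ((xj - xi) * (y - yi)) / (yj - yi) + xi) {
      inside = !inside;
    }
  }
  return inside;
}

// On each simulation tick, pull any escaped node back toward the polygon's
// centroid until it is inside again. Crude but effective; a real custom
// force would also zero the node's outward velocity. Assumes the centroid
// lies inside the polygon (true for convex shapes).
function constrainToPolygon(nodes, polygon, centroid) {
  for (const node of nodes) {
    while (!polygonContains(polygon, [node.x, node.y])) {
      node.x += (centroid[0] - node.x) * 0.1;
      node.y += (centroid[1] - node.y) * 0.1;
    }
  }
}
```

You would register this as a tick handler on the simulation, alongside the polygon collision force.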
I want to create a NurbsSurface in OpenGL. I use a 40×48 grid of control points. Besides this, I create indices in order to determine the order of the vertices.
In this way I created my surface of triangles.
Just to avoid misunderstandings, I have
float[] vertices = x1,y1,z1, x2,y2,z2, x3,y3,z3, ... and
int[] indices = 1,6,2,7,3,8, ...
Now I don't want to draw triangles; I would like to interpolate the surface points. I'm thinking of NURBS or B-splines.
The clue is: in order to apply the NURBS algorithms I have to interpolate patch by patch. In my understanding one patch is defined by, for example, points 1,6,2,7 or 2,7,3,8 (please open the picture).
First of all I created the vertices and indices in order to use a vertexshader.
But actually it would be enough to draw it the old way. In that case I would define the vertices and indices as follows:
float[] vertices = v1, v2, v3, ... with v = x,y,z
and
int[] indices = 1,6,2,7,3,8, ...
In GLU (the OpenGL Utility Library) there is a NURBS renderer ready to use, created with gluNewNurbsRenderer. So I can render a patch easily.
Unfortunately, I fail at the point of how to stitch the patches together. I found an explanation in the teapot example, but (maybe I have become obsessed by this) I can't transfer the solution to my case. Can you help?
You have a set of control points from which you want to draw a surface. There are two ways you can go about this:

1. Calculate the vertices from the control points on the CPU and pass them down the graphics pipeline with GL_TRIANGLES as the topology. This is the approach described in the teapot example link you provided. Remember that the graphics hardware needs triangulated data in order to draw. This chapter shows how to evaluate vertices from control points: http://www.glprogramming.com/red/chapter12.html

2. Prepare patches from your control points and use tessellation shaders to triangulate and stitch them. For this you treat each set of control points as a patch, draw with the GL_PATCHES primitive, and pass it to the tessellation control shader, where you specify the outer and inner tessellation levels (these control how finely the patch is subdivided). Based on those levels the patch is tessellated by a fixed-function stage known as the primitive generator, and the generated vertices are then passed to the tessellation evaluation shader, where you compute their final positions, e.g. by evaluating the spline basis functions.

I would suggest you keep your VBO and IBO as you have them, with the control points, and draw with the GL_PATCHES primitive, then follow a tutorial on how to use tessellation shaders to draw NURBS surfaces.

Note: the second method is kind of tricky and you will have to read a lot of research papers. If you don't want to go with the modern pipeline, I suggest option 1.
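As a rough sketch of the evaluation step in option 1 (illustrative only, and simplified to a uniform cubic B-spline patch; full NURBS adds per-point weights and a knot vector), in JavaScript for brevity:

```javascript
// Uniform cubic B-spline basis functions for parameter t in [0, 1].
// The four values always sum to 1 (partition of unity).
function cubicBSplineBasis(t) {
  const t2 = t * t, t3 = t2 * t;
  return [
    (1 - 3 * t + 3 * t2 - t3) / 6,
    (4 - 6 * t2 + 3 * t3) / 6,
    (1 + 3 * t + 3 * t2 - 3 * t3) / 6,
    t3 / 6,
  ];
}

// Evaluate one point on a bicubic patch defined by a 4x4 grid of [x, y, z]
// control points. Sampling a (u, v) grid of such points and triangulating
// them gives the mesh you would upload as GL_TRIANGLES.
function evalPatch(patch, u, v) {
  const bu = cubicBSplineBasis(u);
  const bv = cubicBSplineBasis(v);
  const p = [0, 0, 0];
  for (let i = 0; i < 4; i++) {
    for (let j = 0; j < 4; j++) {
      const w = bu[i] * bv[j];
      for (let k = 0; k < 3; k++) p[k] += w * patch[i][j][k];
    }
  }
  return p;
}
```

Because adjacent B-spline patches share three columns/rows of control points, evaluating overlapping 4×4 windows of your 40×48 grid stitches the patches together with C2 continuity automatically.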
In the fiddle below, I've drawn circles at certain points along a recursive tree structure.
https://jsfiddle.net/ypbprzzv/4/
The tree structure itself is a simplified version of the one found here:
https://processing.org/examples/tree.html
What if, instead of drawing circles at (0, -h) on transformed and rotated grids (which is where they're drawn in the fiddle), I wanted to hang pendulums that hang in the unrotated y direction (straight down)? If the pendulums were instances of an object class, it would be easy to add a new instance instead of (or in addition to) drawing the circle.
void branch(float h) {
  h *= 0.6;
  if (h > 10) {
    pushMatrix();
    rotate(a);
    line(0, 0, 0, -h);
    fill(0, 175, 0, 100);
    if (h < 50) {
      // I could add a new pendulum here, at (0, -h)
      ellipse(0, -h, 5, 5);
    }
    translate(0, -h);
    branch(h);
    popMatrix();
  } // closing the if statement
} // closing branch function
I have already tried this, but because I wanted to keep the code brief I have not included it. The pendulums do indeed hang, but in wacky directions, since when I create the instances the whole grid is transformed and rotated (which needs to be the case to simplify drawing the tree or other interesting structures).
And suppose I want to make these pendulums sensitive to user interaction: the objects' frame of reference is different from the user's.
So I'll try to summarize the question:
Is it possible to create instances of objects on a transformed and rotated grid, but have that object behave in a prescribed way in relation to the unrotated grid?
Would it be helpful to provide a fiddle including the pendulums?
It's a little bit hard to help with general "how do I do this" or "is it possible" questions like this. The reason that it's hard to answer "is it possible" questions is the answer is almost always yes, it's possible. Similarly, "how do I do this" questions always have about a million possible answers, and there isn't any single best way to approach a problem. The "right" answer is really more dependent on how you think about the problem than anything else. But I'll try to help in a general sense.
Is it possible to create instances of objects on a transformed and rotated grid, but have that object behave in a prescribed way in relation to the unrotated grid?
Yes, it's possible.
There are a number of ways you might approach this:
- You might keep track of the current state (current rotation) and then undo that when you draw the pendulum. For example, if you're rotated to 90 degrees, you'd simply rotate by -90 degrees before drawing the pendulum.
- You could maybe use the screenX() and screenY() functions to get the screen location of a transformed point. More info can be found in the reference.
- You could store positions as you recursively draw the tree, and then after the tree is drawn, you could loop over those points to draw the pendulums.
These are just what I could think of right now, and there are probably more ways to approach it. Again, which approach you choose really depends on how this stuff fits into your brain, which is pretty hard to help you with.
But if you want my two cents: I personally find it hard to "think in transformations" to do stuff like what you're describing. Instead, if I were you, I would refactor the code so it no longer relies on translations and rotations. I'd use basic trig (the sin() and cos() functions) to draw everything. Then you'd already be in screen coordinates, so drawing your pendulums would be much easier.
But again, which approach you take really depends on how you think about things.
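To make the trig suggestion concrete, here is a sketch (in plain JavaScript with illustrative names and branch angles, not the original fiddle's code) of walking the tree with an explicit (x, y, angle) state so every pendulum anchor comes out in untransformed screen coordinates:

```javascript
// Walk the recursive tree without matrix transforms: carry the current
// position and accumulated angle explicitly, and collect pendulum anchor
// points in screen coordinates. Branch angle of 0.5 rad and the two-way
// split are illustrative choices, not from the original fiddle.
function collectPendulumAnchors(x, y, angle, h, out) {
  h *= 0.6;
  if (h > 10) {
    // Endpoint of this branch, already in screen coordinates
    // (y grows downward, as in Processing / canvas).
    const nx = x + h * Math.sin(angle);
    const ny = y - h * Math.cos(angle);
    if (h < 50) out.push({ x: nx, y: ny }); // pendulum hangs straight down from here
    collectPendulumAnchors(nx, ny, angle + 0.5, h, out);
    collectPendulumAnchors(nx, ny, angle - 0.5, h, out);
  }
  return out;
}
```

Since the anchors are in screen coordinates, drawing each pendulum (and hit-testing it against the mouse) needs no inverse transforms at all.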
I'm trying to find a lightweight way to find nearby objects in three.js.
I have a bunch of cubes, and I want each cube to be able to determine the nearest cubes to it on demand.
Is there a better way to do this than just iterating through all objects and calculating the distance between them? I know the renderer does something similar to what I want when it sorts to find the order to render with, but I'm not getting too far just trying to read the three.js code.
The renderer is essentially doing what you describe (iterating and sorting by distance), but for repeated nearest-neighbour queries you may want to use a k-d tree in your case.
Have a look at this example:
http://threejs.org/examples/webgl_nearestneighbour.html
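If you want to see the idea without the three.js helper, here is a minimal k-d tree sketch (illustrative; the linked example uses a k-d tree utility shipped with the three.js examples). Build the tree once from the cube positions, then each query is O(log n) on average instead of O(n):

```javascript
// Build a 3-D k-d tree from an array of [x, y, z] points by splitting
// on the median along a cycling axis.
function buildKdTree(points, depth = 0) {
  if (points.length === 0) return null;
  const axis = depth % 3;
  const sorted = points.slice().sort((a, b) => a[axis] - b[axis]);
  const mid = Math.floor(sorted.length / 2);
  return {
    point: sorted[mid],
    axis,
    left: buildKdTree(sorted.slice(0, mid), depth + 1),
    right: buildKdTree(sorted.slice(mid + 1), depth + 1),
  };
}

function sqDist(a, b) {
  return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 + (a[2] - b[2]) ** 2;
}

// Recursive nearest-neighbour search: descend the near side first, and
// only visit the far side if the splitting plane is closer than the
// best hit found so far.
function nearest(node, target, best = null) {
  if (!node) return best;
  const d = sqDist(node.point, target);
  if (!best || d < best.dist) best = { point: node.point, dist: d };
  const diff = target[node.axis] - node.point[node.axis];
  const near = diff < 0 ? node.left : node.right;
  const far = diff < 0 ? node.right : node.left;
  best = nearest(near, target, best);
  if (diff * diff < best.dist) best = nearest(far, target, best);
  return best;
}
```

Note that querying from a cube's own position returns the cube itself (distance 0), so skip zero-distance hits when you want the nearest *other* cube. You would rebuild the tree whenever the cubes move.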
I'm using D3 to create a world map with an orthographic projection that the user can "spin" with their mouse like they would a globe.
I ran into some problems with jittery rendering in Firefox so I simplified my map features using an implementation of the Douglas-Peucker algorithm in R. I dumped this into geoJSON and have it rendered by D3 as in this example: http://jsfiddle.net/cmksA/8/. (Note that the problem I describe below doesn't occur with the non-simplified features, but Firefox is unusable if I don't simplify.)
Performance is still poor (getting better) in Firefox, but a new issue has crept in. When you pan the globe so that Indonesia is roughly in the center of the globe, one of the polygons gets transformed to cover the entire globe. The same issue happens when North and South America are centered.
As part of the panning, I re-project/re-draw the globe using the following function (line 287 of the jsfiddle):
function panglobe() {
  var x = d3.event.dx;
  var y = d3.event.dy;
  var r = mapProj.rotate();
  r[0] = r[0] + lonScale(x);
  r[1] = r[1] + latScale(y);
  mapProj.rotate(r);
  countries.attr("d", function(d) {
    var dee = mapPath(d);
    return dee ? dee : "M0,0";
  });
}
Any help/insight/advice would be much appreciated. Cheers
A common problem with line-simplification algorithms when applied to polygons is that they can introduce self-intersections, which generally cause havoc with geometry algorithms.
It's quite possible that your simplified polygons contain some self-intersections, e.g. a segment that goes back on itself. This might cause problems for D3, e.g. when sorting intersections along the clip region edge (although in future releases I hope to support self-intersecting polygons, too).
A better algorithm to use might be Visvalingam–Whyatt, e.g. as used by TopoJSON, as it simplifies based on area. However, it can also produce self-intersecting polygons, although perhaps less often than Douglas–Peucker.
For interactive globes I’d recommend world-110m.json from Mike Bostock’s world-atlas.
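If you want to check whether the simplification introduced self-intersections, a brute-force audit of each ring is straightforward. This is a sketch: it tests strict crossings between non-adjacent segments on planar coordinates (run it on the lon/lat rings as an approximation) and ignores degenerate collinear overlaps:

```javascript
// Strict segment-intersection test via orientation signs.
function segmentsIntersect(p, q, r, s) {
  const cross = (o, a, b) =>
    (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0]);
  const d1 = cross(r, s, p), d2 = cross(r, s, q);
  const d3 = cross(p, q, r), d4 = cross(p, q, s);
  return ((d1 > 0) !== (d2 > 0)) && ((d3 > 0) !== (d4 > 0));
}

// O(n^2) check of every pair of non-adjacent edges in a closed ring.
function ringSelfIntersects(ring) {
  const n = ring.length;
  for (let i = 0; i < n; i++) {
    for (let j = i + 2; j < n; j++) {
      if (i === 0 && j === n - 1) continue; // these edges are adjacent via wrap-around
      if (segmentsIntersect(ring[i], ring[(i + 1) % n], ring[j], ring[(j + 1) % n])) {
        return true;
      }
    }
  }
  return false;
}
```

Running this over your simplified GeoJSON rings would tell you whether the Indonesia/Americas polygons are among the self-intersecting ones before D3 ever sees them.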
My code to calculate the minimum translation vector using the Separating Axis Theorem works perfectly well, except when one of the polygons is completely contained by another polygon. I have scoured the internet for the solution to this problem and everyone just seems to ignore it ( http://www.codezealot.org/archives/55#sat-contain talks about this, but doesn't give a full solution...)
The picture below is a screenshot from my program illustrating the problem. The translucent blue shape is the position of the rectangle before the MTV is applied, and the other is with the MTV applied.
It seems to me that the link you shared does give a solution to this. In your MTV calculation, you have to test for complete containment in a projection and change the calculations accordingly. (The pseudocode is in reference to figure 9 on that page.) Perhaps if you post your code, we can comment on why it isn't working.
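For reference, here is a sketch of the containment-aware per-axis overlap that the article's figure 9 pseudocode describes (written in JavaScript with my own names; your SAT loop would call this for each axis and keep the smallest result as the MTV magnitude):

```javascript
// Overlap of two projection intervals [minA, maxA] and [minB, maxB]
// along one separating axis. Returns null if they don't overlap
// (a separating axis exists). When one interval fully contains the
// other, the plain overlap is not enough to separate the shapes, so
// we add the distance to the nearer pair of interval ends.
function intervalMTVOverlap(minA, maxA, minB, maxB) {
  if (maxA < minB || maxB < minA) return null; // separated on this axis
  let overlap = Math.min(maxA, maxB) - Math.max(minA, minB);
  const aContainsB = minA <= minB && maxA >= maxB;
  const bContainsA = minB <= minA && maxB >= maxA;
  if (aContainsB || bContainsA) {
    const mins = Math.abs(minA - minB);
    const maxs = Math.abs(maxA - maxB);
    overlap += Math.min(mins, maxs); // push all the way out the nearer side
  }
  return overlap;
}
```

Without the containment branch, a fully contained shape reports only the small interval overlap, which is exactly the too-short MTV your screenshot shows.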