Consider
digraph G {
0[pos="0,0!"]
1[pos="-2,3!"]
2[pos="2,3!"]
0->1[label="0.5"]
0->2[label="0.5"]
1->2[label="0.5"]
}
Under neato, this gives:
In my application, I have coordinates that are 10 times the coordinates here. That is, node 0 is at (0,0), node 1 at (-20,30), and node 2 at (20,30).
With
digraph G {
0[pos="0,0!"]
1[pos="-20,30!"]
2[pos="20,30!"]
0->1[label="0.5"]
0->2[label="0.5"]
1->2[label="0.5"]
}
the rendered graph becomes very "distant", so the node labels and the arc labels are rendered in a very small font.
Is there a way to control the scaling of the graph so that, even with the coordinates magnified 10 times (i.e., an order of magnitude higher), the rendering is just as legible as before, as long as the relative positions of the nodes are unchanged (i.e., the coordinates are scaled by a constant multiplicative factor)? I could scale the coordinates manually by dividing every coordinate I obtain from my application by 10. Before doing that, I would like to know whether the rendering engine can take care of this for me by itself.
Note: all of the rendering has been done at the online engine: https://dreampuf.github.io/GraphvizOnline/
The scale attribute seems to be what you want (https://graphviz.org/docs/attrs/scale/)
Try neato -s10 -T... myfile.gv
I created an area chart with three.js. Each data point creates two triangles: one from the bottom to the height of the value to the next value's height, and one to fill the gap. Pretty similar to the work of gmarland at http://gmarland.github.io/mercer/ (which I found after creating it, while researching a solution for this question; hard luck...).
Not knowing of any option to fill the area with a gradient as a whole, I filled the single triangles with vertexColors. It works, but obviously low values get the same color gradient as higher ones, just at another scale, creating a nice effect but not visualizing the actual data. So here is the challenge, for which I can't think of a nice solution yet:
I would like to fill the area with a gradient that reflects the values, i.e. from 0 (yellow) to 100 (blue), and if a value is in between it stops somewhere at orange.
If I applied that logic using vertexColors for my triangles, the single triangles would become visible, as they'd have different colors at different heights, so that's not an option.
Any chance to fill the whole mesh (so, area of the chart) with a gradient?
Example of a 2D chart with that "effect": http://users.infragistics.com/2013.2/Ignite/Chart-Gradient.jpg
By the sounds of this I think you want to use a texture. You can generate several THREE.DataTexture objects, all of width 1 and height 100 (several, to keep the filtering simple). Fill them up with your values and then map them to your triangles using some logic.
Either scaled by the max height of these graphs, or by the entire graph (it looks like the red represents the peak of the curve, not the ceiling of the graph).
This is very similar to what you are doing with vertex colors, but instead of vertex colors, you need to generate UVs. U can always be 0 for every vertex; V is just the height of the vertex, normalized.
Given you have a bunch of shapes, say like this:
I'm wondering if there is a real-time algorithm out there (any suggestions would be helpful too) that can identify that you can approximate them with large circles, sort of like this:
It doesn't have to be circles specifically; it can be parameterized to work in different ways. Just wondering how to basically:
Identify a chunk of shapes that can be approximated by a simpler shape.
Overlay that shape on top of the more complex/smaller shapes.
Thank you.
I can think of an approach; I would call it changing contrast/brightness, as in image-editing applications.
Get the centers of all your shapes.
Get their volumes.
Calculate the weight of each shape as a function of the distances to, and volumes of, the other shapes: Wx = F(Di, Vi), where x is the index of the current shape, Di is the distance between shape x and shape i, and Vi is the volume of shape i (see the sketch after this list for one possible F).
Have a variable (perhaps a value scroll bar) to change the brightness.
Have another such variable for the contrast.
Calculate the average weight of all shapes.
Increasing the brightness means decreasing the volume of far shapes (i.e., shapes with weight below the average, the "low weight" ones).
Increasing the contrast means increasing the volume of high-weight shapes and decreasing the volume of low-weight ones.
By changing both contrast and brightness, some shapes will disappear, others will merge into one shape, and you will get a simpler shape (or shapes).
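To make the idea concrete, here is a minimal C++ sketch of one possible weight function and of the brightness/contrast adjustment, assuming each shape is reduced to a centre point and a volume. The inverse-distance form of F, the adjustment factors, and all names are my own illustrative choices, not part of the idea above.
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical shape record: a centre position and a volume (area in 2D).
struct Shape { double x, y, volume; };

// One possible weight function: every other shape contributes its volume,
// scaled down by its distance. The exact form of F(Di, Vi) is a free choice;
// Vi / (1 + Di) is only an illustration.
std::vector<double> weights(const std::vector<Shape>& shapes) {
    std::vector<double> w(shapes.size(), 0.0);
    for (size_t a = 0; a < shapes.size(); ++a)
        for (size_t b = 0; b < shapes.size(); ++b) {
            if (a == b) continue;
            double d = std::hypot(shapes[a].x - shapes[b].x,
                                  shapes[a].y - shapes[b].y);
            w[a] += shapes[b].volume / (1.0 + d);
        }
    return w;
}

// Brightness shrinks low-weight (far/isolated) shapes; contrast grows
// high-weight shapes and shrinks low-weight ones, relative to the average.
// Both parameters are fractions in [0,1]; a shape whose volume reaches 0
// effectively disappears.
void adjust(std::vector<Shape>& shapes, double brightness, double contrast) {
    std::vector<double> w = weights(shapes);
    double avg = 0.0;
    for (double v : w) avg += v;
    avg /= double(w.size());
    for (size_t i = 0; i < shapes.size(); ++i) {
        if (w[i] < avg) shapes[i].volume *= 1.0 - brightness;
        shapes[i].volume *= (w[i] >= avg) ? 1.0 + contrast : 1.0 - contrast;
        if (shapes[i].volume < 0.0) shapes[i].volume = 0.0;
    }
}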
I'm trying to come up with an algorithm to optimize the shape of a polygon (or multiple polygons) to maximize the value contained within that shape.
I have data with 3 columns:
X: the location on the x axis
Y: the location on the y axis
Value: the value of the block, which can be positive or negative.
This data is from a regular grid so the spacing between each x and y value is consistent.
I want to create a bounding polygon that maximizes the contained value, with one added condition:
There needs to be a minimum radius maintained at all points of the polygon. This means that we will either lose some positive-value blocks or gain some negative-value blocks.
The current algorithm I'm using does the following:
Finds the maximum block value as a starting point (or user defined)
Finds all blocks within the minimum radius and determines whether it is a viable point by checking that the overall value is positive
Removes all blocks in the minimum search radius from further value calculations and flags them as part of the final shape
Moves on to the next point, determined by spiraling around the original point (the center is always a grid point, so it moves by deltaX or deltaY)
This appears to be picking up some cells that aren't needed. I'm sure there are shape algorithms out there but I don't have any idea what to look up to find help.
Below is a picture that hopefully helps outline the question. Positive cells are shown in red (negative cells are not shown). The black outline shows the shape my current routine is returning. I believe the left side should be brought in more. The minimum radius is 100 m; the bottom-left black circle is approximately this size.
Right now the code is running in R but I will probably move to something else if I can get the algorithm correct.
In response to the unclear vote, the problem I am trying to solve, stated without the background or attempted solution, is:
"Create a bounding polygon (or polygons) around a series of points to maximize the contained value, while maintaining a minimum radius of curvature along the polygon"
Edit:
Data
I should have included some data; it can be found here.
The file is a CSV with 4 columns (X, Y, Z [not used], Value); it is about 25k rows long and roughly 800 KB.
Graphical approach
I would approach this graphically. My intuition tells me that the inside points are those fully inside the cast circles of minimum radius r around all of the nearby footprint points. That means if you cast a circle of radius r from each footprint point, then all points that are inside at least half of all neighboring circles are inside your polygon. To be less vague: if you are deep inside the polygon, you get about Pi*r^2 such overlapping circles at any pixel; if you are on the edge, you get half of them. This is easily computable.
First I need the dataset. As you provided just a JPG file, I do not have the values, only the plot, so I handle this problem like a binary image. First I needed to recolor the image to remove the JPG color distortions. After that, this is my input:
I chose a black background to easily apply additive math on the image (and I also like it more than white), and left the footprint red (maximally saturated). Now the algorithm:
create temp image
It should be the same size and cleared to black (color=0). Handle its pixels like integer counters of overlapping circles.
cast circles
For each red pixel in the source image, add +1 to each pixel inside the circle of minimal radius r around the same position, but in the temp image. The result is like this (blue are the lower bits of my pixel format):
For r I used r=24, as that is the bottom-left circle radius in your example, +/- a pixel.
select inside pixels only
So recolor the temp image: all the pixels with a count < 0.5*pi*r^2 are recolored to black and the rest to red. The result is like this:
select polygon circumference points only
Just recolor all red pixels that are near black pixels to some neutral color (blue) and the rest to black. Result:
Now just polygonize the result. To compare with the input image you can combine them both (I OR them together):
[Notes]
You can play with the min radius or the area threshold property to achieve different behavior, but I think this is a pretty close match to your problem.
Here some C++ source code for this:
//picture pic0,pic1;
// pic0 - source
// pic1 - output/temp
int x,y,xx,yy;
const int r=24; // min radius
const int s=float(1.570796*float(r*r)); // half of the min radius circle area (0.5*pi*r*r)
const DWORD c_foot=0x00FF0000; // red
const DWORD c_poly=0x000000FF; // blue
// resize and clear temp image
pic1=pic0;
pic1.clear(0);
// add min radius circle to temp around any footprint pixel found in input image
for (y=r;y<pic1.ys-r;y++)
 for (x=r;x<pic1.xs-r;x++)
  if (pic0.p[y][x].dd==c_foot)
   for (yy=-r;yy<=r;yy++)
    for (xx=-r;xx<=r;xx++)
     if ((xx*xx)+(yy*yy)<=r*r)
      pic1.p[y+yy][x+xx].dd++;
pic1.save("out0.png");
// select only pixels which are inside footprint with min radius (half of area circles are around)
for (y=0;y<pic1.ys;y++)
 for (x=0;x<pic1.xs;x++)
  if (pic1.p[y][x].dd>=s) pic1.p[y][x].dd=c_foot;
  else pic1.p[y][x].dd=0;
pic1.save("out1.png");
// select only outside pixels
pic1.growfill(c_foot,0,c_poly);
for (y=0;y<pic1.ys;y++)
 for (x=0;x<pic1.xs;x++)
  if (pic1.p[y][x].dd==c_foot) pic1.p[y][x].dd=0;
pic1.save("out2.png");
pic1|=pic0; // combine in and out images to compare
pic1.save("out3.png");
I use my own picture class for images, so some of its members are:
xs,ys size of image in pixels
p[y][x].dd is pixel at (x,y) position as 32 bit integer type
clear(color) - clears entire image
resize(xs,ys) - resizes image to new resolution
[Edit1] I had a small bug in the source code
I noticed some edges were too sharp, so I checked the code and found I had forgotten to add the circle condition while filling, so it filled squares instead. I have repaired the source code above; I really just added the line if ((xx*xx)+(yy*yy)<=r*r). The results changed slightly, so I also updated the images with the new results.
I played with the inside-area coefficient ratio, and this one:
const int s=float(0.75*1.570796*float(r*r));
leads to an even better match for you. The smaller it is, the more the polygon can overlap outside the footprint. Result:
If the solution set must be a union of disks of given radius, I would try a greedy approach. (I suspect that the problem might be intractable - exponential running time - if you want an exact solution.)
For all pixels (your "blocks"), compute the sum of values in the disk around it and take the one with the highest sum. Mark this pixel and adjust the sums of all the pixels that are in its disk by deducting its value, because the marked pixel has been "consumed". Then scan all pixels in contact with it by an edge or a corner, and mark the pixel with the highest sum.
Continue this process until all sums are negative. Then the sum cannot increase anymore.
For an efficient implementation, you will need to keep a list of the border pixels, i.e. the unmarked pixels that are neighbors of a marked pixel. After you have picked the border pixel with the largest sum and marked it, you remove it from the list and recompute the sums for the unmarked pixels inside its disk; you also add the unmarked pixels that touch it.
On the picture, the pixels are marked in blue and the border pixels in green. The highlighted pixels are
the one that gets marked,
the ones for which the sum needs to be recomputed.
The computing time will be proportional to the area of the image times the area of a disk (for the initial computation of the sums), plus the area of the shape times the area of a disk (for the updates of the sums), plus the total of the lengths of the successive perimeters of the shape while it grows (to find the largest sum). [As the latter term can be costly, on the order of the product of the area of the shape by its perimeter length, it is advisable to use a heap data structure, which will reduce the sum of the lengths to the sum of their logarithms.]
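For illustration, here is a rough C++ sketch of that greedy loop over a dense row-major grid. It rescans the border in full on every step and omits the border-list/heap optimisation described above; all names and the grid layout are my own illustrative choices.
#include <algorithm>
#include <utility>
#include <vector>

// Sketch of the greedy disk growth described above.
struct GreedyDisks {
    int w, h, r;                          // grid size and disk radius (in cells)
    std::vector<double> value, score;     // cell value, current disk sum per cell
    std::vector<char>   marked;           // cells already added to the shape

    GreedyDisks(int w, int h, int r, std::vector<double> v)
        : w(w), h(h), r(r), value(std::move(v)),
          score(w * h, 0.0), marked(w * h, 0) {
        for (int i = 0; i < w * h; ++i)   // initial disk sums
            score[i] = diskSum(i % w, i / w);
    }

    // Apply f to the index of every cell inside the disk of radius r at (cx,cy).
    template <class F> void forDisk(int cx, int cy, F f) const {
        for (int y = std::max(0, cy - r); y <= std::min(h - 1, cy + r); ++y)
            for (int x = std::max(0, cx - r); x <= std::min(w - 1, cx + r); ++x)
                if ((x - cx) * (x - cx) + (y - cy) * (y - cy) <= r * r)
                    f(y * w + x);
    }

    double diskSum(int cx, int cy) const {
        double s = 0;
        forDisk(cx, cy, [&](int i) { s += value[i]; });
        return s;
    }

    // Consume cell i: its value no longer counts towards any disk sum.
    void mark(int i) {
        marked[i] = 1;
        forDisk(i % w, i / w, [&](int j) { score[j] -= value[i]; });
    }

    bool touchesMarked(int x, int y) const {
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx) {
                int nx = x + dx, ny = y + dy;
                if ((dx || dy) && nx >= 0 && ny >= 0 && nx < w && ny < h &&
                    marked[ny * w + nx])
                    return true;
            }
        return false;
    }

    // Grow the shape greedily; stop when every border sum is negative.
    std::vector<char> run() {
        int best = int(std::max_element(score.begin(), score.end()) - score.begin());
        if (score[best] <= 0) return marked;
        mark(best);
        for (;;) {
            int next = -1;                // best unmarked cell touching the shape
            for (int y = 0; y < h; ++y)
                for (int x = 0; x < w; ++x) {
                    int i = y * w + x;
                    if (!marked[i] && touchesMarked(x, y) &&
                        (next < 0 || score[i] > score[next]))
                        next = i;
                }
            if (next < 0 || score[next] <= 0) break;
            mark(next);
        }
        return marked;
    }
};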
I am trying to draw an area graph with a gradient. This is what I have right now.
If you look at the red-green graph, you will notice the gradient does not look the way it's supposed to.
EDIT: The gradient should be uniform like this:
I am using OpenGL ES 2.0 and GLKit to draw a bunch of charts. The chart is drawn using GL_TRIANGLES. I understand that the issue is that the gradient is being drawn for each triangle individually.
The only approach I can think of is to use a stencil buffer. I will draw the gradient in a big rectangle and clip it to this shape using the stencil. Is there a better way to do this? If not, could you help me draw a stencil with the specified points? I am new to OpenGL and haven't found a good explanation of how to use the stencil buffer.
You don't need a stencil buffer. I don't think more triangles will help, either — more likely that'd just cause you more confusion because you'd be assigning per-vertex colors to intermediate vertices and having to interpolate them yourself.
Your gradients are coming out that way because of how and where you assign vertex colors for interpolation. Notice the difference in colors between your output and the example of what you're looking for:
You've got 100% red at every vertex along the top edge of your graph, and 100% green at every vertex along the bottom edge. OpenGL interpolates colors linearly across the face of each triangle, which is why you've got more red in the shorter parts of your graph.
In the output you're looking for, the top of the graph starts out less red in the shorter parts, so that it makes the transition to white over a shorter distance.
There are a few different ways to do this, but probably the easiest (for your plan of using GLKBaseEffect instead of writing your own shaders) might be to use a 1D texture for your gradient, and assign a texture coordinate to each vertex that's proportional to its Y coordinate on the graph, like so:
(The example coordinates in my diagram assume your graph vertices cover the range 0.0 to 1.0, but the point stands regardless: the vertical texture coordinate for each point should be a fraction of the graph's total height, between 0.0 and 1.0.)
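If it helps, here is a rough sketch (plain C, compiles as C++ too) of the texture side of that idea. OpenGL ES 2.0 has no true 1D textures, so a 1xN 2D strip stands in for one; the colour ramp, graphMaxY, and the function names are placeholders, and the header path is the iOS one. With GLKBaseEffect you would point its texture2d0 property at the returned texture name.
#include <OpenGLES/ES2/gl.h>

GLuint makeGradientTexture(void) {
    unsigned char pixels[256 * 4];                    // 1x256 RGBA strip
    for (int i = 0; i < 256; ++i) {
        pixels[i * 4 + 0] = (unsigned char)i;         // red grows with height
        pixels[i * 4 + 1] = (unsigned char)(255 - i); // green fades with height
        pixels[i * 4 + 2] = 0;
        pixels[i * 4 + 3] = 255;
    }
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1, 256, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}

// Per vertex: U can stay 0, V is the vertex height as a fraction of the
// graph's total height (0.0 at the bottom, 1.0 at the top).
void fillTexCoord(float y, float graphMaxY, float *uv) {
    uv[0] = 0.0f;
    uv[1] = y / graphMaxY;
}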
Alternatively, you could look into drawing in two passes: First, draw the shape of your graph, then draw a quad (two triangles) covering the entire screen with your gradient, using the appropriate glBlendFunc so that it only draws over the area you've filled in with your graph shape.
OpenGL ES can do what you want but you need to increase the tessellation of your model. In other words, instead of using just a few large triangles, you need more and smaller triangles, with the vertex color changes spread over them evenly. This will give you better control over the gradients. Triangles are cheap on accelerated OpenGL ES, so even if you increase the number 100 times, it will not have much impact on performance.
You might also consider a different approach, where the entire graph is covered by a single texture which contains the gradient. That would be easier to implement.
For a game, I'm drawing dense clusters of several thousand randomly-distributed circles with varying radii, defined by a sequence of (x,y,r) triples. Here's an example image consisting of 14,000 circles:
I have some dynamic effects in mind, such as merging clusters, but for this to be possible I'll need to redraw all the circles every frame.
Many (maybe 80-90%) of the circles that are drawn are covered over by subsequent draws. Therefore I suspect that with preprocessing I can significantly speed up my draw loop by eliminating covered circles. Is there an algorithm that can identify them with reasonable efficiency?
I can tolerate a fairly large number of false negatives (ie draw some circles that are actually covered), as long as it's not so many that drawing efficiency suffers. I can also tolerate false positives as long as they're almost positive (eg remove some circles that are only 99% covered). I'm also amenable to changes in the way the circles are distributed, as long as it still looks okay.
This kind of culling is essentially what hidden surface algorithms (HSAs) do - especially the variety called "object space". In your case the sorted order of the circles gives them an effective constant depth coordinate. The fact that it's constant simplifies the problem.
A classical reference on HSA's is here. I'd give it a read for ideas.
An idea inspired by this thinking is to consider each circle with a "sweep line" algorithm, say a horizontal line moving from top to bottom. The sweep line contains the set of circles that it's touching. Initialize by sorting the input list of the circles by top coordinate.
The sweep advances in "events", which are the top and bottom coordinates of each circle. When a top is reached, add the circle to the sweep. When its bottom occurs, remove it (unless it's already gone as described below). As a new circle enters the sweep, consider it against the circles already there. You can keep events in a max (y-coordinate) heap, adding them lazily as needed: the next input circle's top coordinate plus all the scan line circles' bottom coordinates.
A new circle entering the sweep can do any or all of 3 things.
Obscure circles in the sweep with greater depth. (Since we are identifying circles not to draw, the conservative side of this decision is to use the biggest included axis-aligned box (BIALB) of the new circle to record the obscured area for each existing deeper circle.)
Be obscured by other circles with lesser depth. (Here the conservative way is to use the BIALB of each other relevant circle to record the obscured area of the new circle.)
Have areas that are not obscured.
The obscured area of each circle must be maintained (it will generally grow as more circles are processed) until the scan line reaches its bottom. If at any time the obscured area covers the entire circle, it can be deleted and never drawn.
The more detailed the recording of the obscured area is, the better the algorithm will work. A union of rectangular regions is one possibility (see Android's Region code for example). A single rectangle is another, though this is likely to cause many false positives.
Similarly a fast data structure for finding the possibly obscuring and obscured circles in the scan line is also needed. An interval tree containing the BIALBs is likely to be good.
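Here is a heavily simplified C++ sketch of the sweep skeleton, just to show the shape of the bookkeeping. The per-circle obscured-region union described above is collapsed into a single conservative test: a circle is culled only when one single later-drawn circle's inscribed square (its BIALB) covers it entirely. The names, the linear active list (instead of an interval tree), and the painter's-order convention (higher index drawn later, i.e. on top) are all assumptions.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Circle { float x, y, r; };

// Conservative "a is hidden behind b" test: a's bounding box lies entirely
// inside the biggest axis-aligned square inscribed in b.
static bool hiddenBehind(const Circle& a, const Circle& b) {
    float s = b.r / std::sqrt(2.0f);      // half side of the inscribed square
    return a.x - a.r >= b.x - s && a.x + a.r <= b.x + s &&
           a.y - a.r >= b.y - s && a.y + a.r <= b.y + s;
}

// circles[i] is drawn i-th (painter's order: a higher index ends up on top).
// Returns one keep/cull flag per circle; the sweep ensures only circles that
// overlap in y are ever compared against each other.
std::vector<char> cullHidden(const std::vector<Circle>& circles) {
    std::vector<char> keep(circles.size(), 1);
    std::vector<size_t> order(circles.size());
    for (size_t i = 0; i < order.size(); ++i) order[i] = i;
    // process circles by their top edge (y - r), topmost first
    std::sort(order.begin(), order.end(), [&](size_t a, size_t b) {
        return circles[a].y - circles[a].r < circles[b].y - circles[b].r;
    });
    std::vector<size_t> active;           // circles the sweep line currently touches
    for (size_t idx : order) {
        const Circle& c = circles[idx];
        float top = c.y - c.r;
        // retire circles that end before the sweep line
        active.erase(std::remove_if(active.begin(), active.end(),
                        [&](size_t j) { return circles[j].y + circles[j].r < top; }),
                     active.end());
        for (size_t j : active) {
            // the circle drawn earlier is deeper and may be hidden by the other;
            // an already-culled occluder is still valid, since whatever hides it
            // also hides everything behind it
            if (j < idx && hiddenBehind(circles[j], c)) keep[j] = 0;
            if (idx < j && hiddenBehind(c, circles[j])) keep[idx] = 0;
        }
        active.push_back(idx);
    }
    return keep;
}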
Note that in practice algorithms like this only produce a win if the number of primitives is huge because fast graphics hardware is so ... fast.
Based on the example image you provided, it seems your circles have a near-constant radius. If their radius cannot be lower than a significant number of pixels, you could take advantage of the simple geometry of circles to try an image-space approach.
Imagine you divide your rendering surface in a grid of squares so that the smallest rendered circle can fit into the grid like this:
the circle radius is sqrt(10) grid units and the circle covers at least 21 squares, so if you mark the squares entirely overlapped by any circle as already painted, you will have eliminated a fraction of approximately 21/(10*pi) of the circle surface, that is, about 2/3.
You can get some ideas of optimal circle coverage by squares here
The culling process would look a bit like a reverse-painter algorithm:
For each circle, from closest to farthest
    if all squares overlapped (even partially) by the circle are painted
        eliminate the circle
    else
        paint the squares totally overlapped by the circle
You could also 'cheat' by painting grid squares not entirely covered by a given circle (or eliminating circles that overflow slightly from the already painted surface), increasing the number of eliminated circles at the cost of some false positives.
You can then render the remaining circles with a Z-buffer algorithm (i.e. let the GPU do the rest of the work).
CPU-based approach
This assumes you implement the grid as a memory bitmap, with no help from the GPU.
To determine the squares to be painted, you can use precomputed patterns based on the distance of the circle center relative to the grid (the red crosses in the example images) and the actual circle radius.
If the relative variations of diameter are small enough, you can define a two dimensional table of patterns indexed by circle radius and distance of the center from the nearest grid point.
Once you've retrieved the proper pattern, you can apply it to the appropriate location by using simple symmetries.
The same principle can be used for checking if a circle fits into an already painted surface.
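To make that concrete, here is a rough C++ sketch of this culling pass, assuming the circles are already sorted front to back and the grid is a plain boolean bitmap. It tests cells directly instead of using the precomputed patterns described above, and it conservatively scans the circle's bounding box when checking whether every touched cell is already painted; the cell size and all names are illustrative.
#include <algorithm>
#include <cmath>
#include <vector>

struct Circle { float x, y, r; };

// Boolean "already painted" bitmap over the render surface; one cell is
// step x step pixels.
struct PaintGrid {
    int cols, rows;
    float step;
    std::vector<char> painted;
    PaintGrid(float width, float height, float step)
        : cols(int(std::ceil(width / step))), rows(int(std::ceil(height / step))),
          step(step), painted(cols * rows, 0) {}
    bool cell(int cx, int cy) const { return painted[cy * cols + cx] != 0; }
    void set(int cx, int cy) { painted[cy * cols + cx] = 1; }
};

// A cell is entirely inside a circle if its farthest corner from the circle
// centre is still within the radius.
static bool cellInsideCircle(const PaintGrid& g, int cx, int cy, const Circle& c) {
    float dx = std::max(std::abs(c.x - cx * g.step), std::abs(c.x - (cx + 1) * g.step));
    float dy = std::max(std::abs(c.y - cy * g.step), std::abs(c.y - (cy + 1) * g.step));
    return dx * dx + dy * dy <= c.r * c.r;
}

// Reverse-painter culling: walk circles front to back, drop a circle when all
// cells it could touch are painted, otherwise paint the cells it fully covers.
// (Scanning the bounding box is a conservative stand-in for "all cells
// overlapped even partially"; it can only make us cull fewer circles.)
std::vector<Circle> cullCovered(const std::vector<Circle>& frontToBack,
                                float width, float height, float cellStep) {
    PaintGrid grid(width, height, cellStep);
    std::vector<Circle> keep;
    for (const Circle& c : frontToBack) {
        int x0 = std::max(0, int((c.x - c.r) / cellStep));
        int y0 = std::max(0, int((c.y - c.r) / cellStep));
        int x1 = std::min(grid.cols - 1, int((c.x + c.r) / cellStep));
        int y1 = std::min(grid.rows - 1, int((c.y + c.r) / cellStep));
        bool covered = true;
        for (int cy = y0; cy <= y1 && covered; ++cy)
            for (int cx = x0; cx <= x1 && covered; ++cx)
                covered = grid.cell(cx, cy);
        if (covered) continue;                       // fully hidden, skip drawing
        keep.push_back(c);
        for (int cy = y0; cy <= y1; ++cy)            // mark cells it fully covers
            for (int cx = x0; cx <= x1; ++cx)
                if (cellInsideCircle(grid, cx, cy, c)) grid.set(cx, cy);
    }
    return keep;
}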
GPU-based approach
It's been a long time since I worked with computer graphics, but if the current state of the art allows, you could let the GPU do the drawing for you.
Painting the grid would be achieved by rendering each circle scaled to fit the grid
Checking elimination would require to read the value of all pixels containing the circle (scaled to grid dimensions).
Efficiency
There should be some sweet spot for the grid dimension. A denser grid will cover a higher percentage of the circles' surface and thus eliminate more circles (fewer false negatives), but the computation cost will grow as O(1/grid_step²).
Of course, if the rendered circles can shrink to about 1 pixel diameter, you could as well dump the whole algorithm and let the GPU do the work. But the efficiency compared with the GPU pixel-based approach grows as the square of the grid step.
Using the grid in my example, you could probably expect about 1/3 false negatives for a completely random set of circles.
For your picture, which seems to define volumes, 2/3 of the foreground circles and (nearly) all of the background ones should be eliminated. Culling more than 80% of the circles might be worth the effort.
All this being said, it is not easy to beat a GPU in a brute-force computation contest, so I have only the vaguest idea of the actual performance gain you could expect. Could be fun to try, though.
Here's a simple algorithm off the top of my head:
Insert the N circles into a quadtree (bottom circle first)
For each pixel, use the quadtree to determine the top-most circle (if it exists)
Fill in the pixel with the color of the circle
By adding a circle, I mean add the center of the circle to the quadtree. This adds 4 children to a leaf node. Store the circle in that leaf node (which is now no longer a leaf). Thus each non-leaf node corresponds to a circle.
To determine the top-most circle, traverse the quadtree, testing each node along the way if the pixel intersects the circle at that node. The top-most circle is the one deepest down the tree that intersects the pixel.
This should take O(M log N) time (if the circles are distributed nicely), where M is the number of pixels and N is the number of circles. The worst-case scenario is still O(MN) if the tree is degenerate.
Pseudocode:
quadtree T
for each circle c
    add(T, c)
for each pixel p
    draw color of top_circle(T, p)
def add(quadtree T, circle c)
    if leaf(T)
        append four children to T, split along center(c)
        T.circle = c
    else
        quadtree U = child of T containing center(c)
        add(U, c)
def top_circle(quadtree T, pixel p)
    c = null
    if not leaf(T)
        if intersects(T.circle, p)
            c = T.circle
        quadtree U = child of T containing p
        d = top_circle(U, p)
        if d is not null
            c = d
    return c
If a circle is completely inside another circle, then it must follow that the distance between their centres plus the radius of the smaller circle is at most the radius of the larger circle (Draw it out for yourself to see!). Therefore, you can check:
float dx = topCircle.centre.x - bottomCircle.centre.x;
float dy = topCircle.centre.y - bottomCircle.centre.y;
float distanceBetweenCentres = sqrt(dx * dx + dy * dy);

if ((bottomCircle.radius + distanceBetweenCentres) <= topCircle.radius) {
    // The bottom circle is covered by the top circle.
}
To improve the speed of the computation, you can first check whether the top circle has a larger radius than the bottom circle; if it doesn't, it can't possibly cover the bottom circle. Hope that helps!
You don't mention a Z component, so I assume they are in Z order in your list and drawn back-to-front (i.e., painter's algorithm).
As the previous posters said, this is an occlusion culling exercise.
In addition to the object space algorithms mentioned, I'd also investigate screen-space algorithms such as Hierarchical Z-Buffer. You don't even need z values, just bitflags indicating if something is there or not.
See: http://www.gamasutra.com/view/feature/131801/occlusion_culling_algorithms.php?print=1