Google Maps Polygon Representation - algorithm

I used Google Maps to make a project that records flood incidents in a certain area. I used polygons to represent those floods. Since the project is for planning purposes, we are required to output all of the historical flood data on a single map. My problem is that if I simply output all the polygons, the map looks messy and cluttered, so I was wondering what method I could use to represent these polygons in a better fashion. We were advised to use heatmaps, but I can't seem to find tutorials on how to turn polygons into heatmaps. Any suggestions would be appreciated. Thanks!

To turn the polygons into a heatmap, render them in black with high transparency onto a white bitmap. This results in a grayscale image that is darker wherever many polygons overlap. Then map the gray values of that bitmap to the hue of a corresponding semi-transparent color bitmap.
Why did the rendering look messy? Did you try rendering filled polygons with high transparency and no outline? That should make the areas that are more prone to flooding appear more strongly "highlighted".
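A rough sketch of the rasterize-then-recolor idea in Python with matplotlib and numpy (the polygon list, extent and colour ramp are placeholders for illustration, not part of the original suggestions):

```python
import matplotlib
matplotlib.use("Agg")                       # off-screen rendering
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon
from matplotlib import cm

def polygons_to_heatmap(flood_polygons, extent, size=512):
    """flood_polygons: list of [(x, y), ...] vertex lists; extent: (xmin, xmax, ymin, ymax)."""
    # 1. Render every polygon in semi-transparent black onto a white canvas.
    fig, ax = plt.subplots(figsize=(4, 4), dpi=size // 4)
    fig.subplots_adjust(left=0, right=1, bottom=0, top=1)
    ax.set_xlim(extent[0], extent[1]); ax.set_ylim(extent[2], extent[3]); ax.axis("off")
    for verts in flood_polygons:
        ax.add_patch(Polygon(verts, closed=True, facecolor="black",
                             alpha=0.1, edgecolor="none"))
    fig.canvas.draw()
    gray = np.asarray(fig.canvas.buffer_rgba())[..., 0] / 255.0   # darker = more overlap
    plt.close(fig)

    # 2. Turn "darkness" into a semi-transparent colour ramp for overlaying on the map.
    density = 1.0 - gray
    rgba = cm.jet(density)
    rgba[..., 3] = np.clip(density * 2.0, 0.0, 0.8)               # more overlap = more opaque
    return rgba   # e.g. save as a PNG and add it to the map as a ground overlay
```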

Related

Is there an algorithm for triangulating contours that overlap?

I am trying to triangulate some vector data for rendering SVG graphics on to the screen using OpenGL. For data whose contours don't overlap I can triangulate and render the shapes fine, but there are cases where the contours do overlap, and this is where I am having issues.
I currently use a Delaunay triangulation algorithm, and the data I am using is TrueType font data. For reference, the character I am looking at is (char)260 of the Arial font. I enclose a picture from FontForge showing the shape.
I can successfully "fill" this shape using the winding order, so I can display the glyph on a bitmap image, but I don't want to do that here; I'd like to render the glyph with OpenGL directly (this works fine for the non-overlapping glyphs).
Does anyone know of a triangulation algorithm that can cater for overlapping contours, or an algorithm that can remove the overlaps?
You need a polygon boolean library like this one (open source) to perform a union operation on all the character's contours, and then triangulate the outcome.
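As a rough illustration of that pipeline in Python (using Shapely rather than the library linked above, and with an unconstrained Delaunay step filtered afterwards, so treat it as a sketch rather than a drop-in solution):

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union, triangulate

def triangulate_glyph(contours):
    """contours: list of vertex lists, possibly overlapping.
    Note: holes/counters would need the winding order handled separately."""
    merged = unary_union([Polygon(c) for c in contours])   # boolean union removes overlaps
    tris = triangulate(merged)                             # Delaunay over the union's vertices
    # The Delaunay step is unconstrained, so keep only triangles whose
    # centroid lies inside the merged shape.
    return [t for t in tris if merged.contains(t.centroid)]
```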

Any ideas on how to remove small isolated pixel groups in a PNG using OpenCV or another algorithm?

I have a PNG image like this:
The blue color represents transparency, and each circle is a group of pixels. I would like to find the biggest group and remove all the small pixel groups that are not connected to it. In this example, the biggest one is the red circle, so I will keep it; the green and yellow ones are too small, so I will remove them. After that, I will have something like this:
Any ideas? Thanks.
If you consider only the size of the objects, use the following algorithm: label the connected components of the mask image of the objects (all object pixels are white, transparent ones are black). Then compute the areas of the connected components and filter them. At this point you have a label map and a list of authorized labels, so you can read the label map and rewrite the mask image, setting every pixel to white if its label is authorized.
OpenCV does not seem to have a labelling function, but cvFloodFill can do the same thing with several calls: for each unlabeled white pixel, call FloodFill with that pixel as the seed. Then store the result of this step in an array (of the size of the image) by assigning each newly filled pixel its label. Repeat this as long as you have unlabeled pixels.
Otherwise you can implement the connected-component labelling for binary images yourself; the algorithm is well known and easy to implement (Matlab's bwlabel is a good reference).
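For what it's worth, newer OpenCV releases do ship a labelling function; a minimal Python sketch of the label-and-filter idea (the file names and the alpha-channel assumption are illustrative):

```python
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_UNCHANGED)      # assumes a 4-channel PNG
mask = (img[:, :, 3] > 0).astype(np.uint8) * 255         # white = object, black = transparent

# Label the connected components and get their areas in one call.
n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)

# Label 0 is the background; keep only the largest foreground component.
areas = stats[1:, cv2.CC_STAT_AREA]
biggest = 1 + int(np.argmax(areas))

cleaned = (labels == biggest)
img[~cleaned] = 0                                         # clear colour and alpha everywhere else
cv2.imwrite("output.png", img)
```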
The handiest way to filter objects, if you have a priori knowledge of their size, is to use morphological operators. In your case, with OpenCV, once you've loaded your image (OpenCV supports PNG), you have to do an "opening", that is an erosion followed by a dilation.
The small objects (smaller than the structuring element you chose) disappear with the erosion, while the bigger ones remain and are restored by the dilation.
(reference here, cv::morphologyEx).
The shape of the big object might be altered. If you're only doing detection this is harmless, but if you want the object not to be transformed you'll need to apply a "top hat" transform.
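A minimal sketch of the opening approach with OpenCV in Python; the kernel size is an assumption and must be larger than the blobs you want to remove:

```python
import cv2

mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)            # white objects on black
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)        # erosion followed by dilation
cv2.imwrite("opened.png", opened)
```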

Google Maps-style quad-tree of materials on a single plane in Three.js – 1x1, 2x2, 4x4 and 8x8

I'm trying and failing to work out how to achieve a quad-tree of materials (images) on a single plane, much like a Google Maps-style zoomable tile that gets more accurate the closer you get.
In short, I want to be able to have a 1x1 image texture (covering a plane that is 256 units wide and tall) that can then be replaced with a 2x2 texture, that can then be replaced with a 4x4 texture, and so on.
Like the image example below…
Ideally, I want to avoid having to create a different plane for each zoom level / number of segments. A perfect solution would allow me to break a single plane into 8x8 segments (highest zoom) and update the number of textures on the fly. So it would start with a 1x1 texture across all 64 (8x8) segments, then change into a 2x2 texture with each texture covering 4x4 segments, and so on.
Unfortunately, I can't work out how to do this. I explored setting the materialIndex for each face, but you aren't able to update those after the first render, so that wouldn't work. I've tried looking into UV coordinates, but I don't understand how they would work in this situation, nor how to actually implement that in Three.js; there is little in the way of documentation / examples for this specific case.
A vertex shader is another option that came up in research, but again I don't know enough to understand how to construct that.
I'd appreciate any and all help with this, it will be a technique that proves valuable for other Three.js users I'm sure.
Not 100% sure what you are trying to do, or whether you are talking about texture atlasing (looking up different textures based on the current setting/zoom), but if you are after quad-tree based texturing that increases in detail as you zoom in, then this is essentially what mipmapping is and does.
(It can also be used to do all sorts of weird things because of that, but that's another adventure entirely.)
Generally mipmapping is automatic, based on the filtering you use; however, it sounds like you need more control over it.
I created an example hidden away in the three.js source tree which may help:
http://mrdoob.github.com/three.js/examples/webgl_materials_texture_manualmipmap.html
It shows you how to load each mipmap level in manually, rather than having it generated automatically.
HTH
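For the quad-tree addressing itself (independent of how the textures end up applied in Three.js), the mapping from a plane coordinate to a tile index and local UVs at a given zoom level is just powers of two; a small sketch with made-up names:

```python
def tile_for(x, y, zoom, plane_size=256):
    """Return the (tx, ty) tile index and local (u, v) inside that tile
    for a point on the plane at the given zoom level (0 -> 1x1, 3 -> 8x8)."""
    tiles = 2 ** zoom                       # 1x1, 2x2, 4x4, 8x8 ...
    tile_size = plane_size / tiles
    tx, ty = int(x // tile_size), int(y // tile_size)
    # Local UVs within the tile, which is what the geometry or shader would use
    # to sample that tile's texture.
    u = (x - tx * tile_size) / tile_size
    v = (y - ty * tile_size) / tile_size
    return (tx, ty), (u, v)
```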

Generating fast color rectangles

I am designing a more powerful color picker for Qt and looking for some advice. How would one go about generating fast, real-time color rectangles such as the ones found in Photoshop (for HSB and RGB)? I was originally thinking of using QImage and scanLine() to calculate all the pixels individually, but this would probably be too slow.
I was thinking it would be better to write an OpenGL shader. As I recall, you can assign colors to vertices and it will interpolate between them for you. I just have no idea how this would be done in Qt, or whether it is even worth the effort.
I am using QGraphicsView to display the rectangle. Any advice would be appreciated.
OK, so looking into QGradient a bit more: could you not use multiple gradients to create the effect you need?
For the last of the three examples, you could create a single gradient with multiple stops for the colours themselves, then overlay this with a QGradient going from black (alpha 0) to black (alpha 255), with appropriate stops so the darkening comes in at the right point.
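A rough PyQt sketch of that two-gradient idea (the widget name and sizes are made up, and the same calls exist in C++ Qt): a horizontal gradient carries the colour stops, and a vertical black gradient with increasing alpha is painted over it.

```python
from PyQt5.QtWidgets import QApplication, QWidget
from PyQt5.QtGui import QPainter, QLinearGradient, QColor, QBrush

class ColorRect(QWidget):
    def paintEvent(self, event):
        p = QPainter(self)
        rect = self.rect()

        # Horizontal gradient with multiple stops for the colours themselves.
        hues = QLinearGradient(rect.topLeft(), rect.topRight())
        for i in range(7):
            hues.setColorAt(i / 6.0, QColor.fromHsvF((i / 6.0) % 1.0, 1.0, 1.0))
        p.fillRect(rect, QBrush(hues))

        # Overlay: black with alpha 0 at the top fading to alpha 255 at the bottom.
        shade = QLinearGradient(rect.topLeft(), rect.bottomLeft())
        shade.setColorAt(0.0, QColor(0, 0, 0, 0))
        shade.setColorAt(1.0, QColor(0, 0, 0, 255))
        p.fillRect(rect, QBrush(shade))

if __name__ == "__main__":
    app = QApplication([])
    w = ColorRect(); w.resize(256, 256); w.show()
    app.exec_()
```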

Can anyone tell me in what other way we can diffuse an image without losing specific parts of an image

The Perona-Malik diffusion equation removes noise from the image on the basis of two principles:
preferring high-contrast edges over low-contrast ones,
preferring wide regions over smaller ones.
Can anyone tell me in what other way we can diffuse an image without losing specific parts of an image like edges, image content, lines and other details?
There is an anisotropic method that claims to preserve edges while reducing noise. See
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=541427
Diffusion operators by nature remove features; that is how they remove the noise, so there is always some loss involved. But by choosing to diffuse differently near edges (either diffusing less, or diffusing tangent to the edge rather than normal to it), edges can be preserved.
If there are features that cannot be lost, the diffusion operator cannot be applied across them or you run the risk of losing the feature.
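To make "diffuse less near edges" concrete, here is a small Python sketch of one explicit iteration scheme in the spirit of Perona-Malik, where the conductivity falls off wherever the local gradient (an edge) is strong; the parameter values are illustrative only.

```python
import numpy as np

def edge_aware_diffusion(img, n_iter=20, kappa=15.0, dt=0.2):
    """Iteratively smooth a grayscale image while damping diffusion across edges.
    Borders wrap around via np.roll, which is acceptable for a sketch."""
    u = img.astype(np.float64)
    for _ in range(n_iter):
        # Finite differences to the four neighbours.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        # Conductivity: close to 1 in flat regions, close to 0 across strong edges.
        c = lambda d: np.exp(-(d / kappa) ** 2)
        u += dt * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return u
```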
More advanced methods use the Structure Tensor of the image to smooth around edges without blurring them.
Basically, the Structure Tensor gives you information about the local gradient of the image.
Using this information you can smooth "flat" areas and sharpen areas that are edges.
Perona-Malik is very old, and better approaches are available now.
Have a look here:
https://github.com/RoyiAvital/Fast-Anisotropic-Curvature-Preserving-Smoothing
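As a sketch of the structure tensor idea (not the code from the linked repository), computing the tensor with scipy looks roughly like this; per-pixel eigenanalysis of it then gives the edge strength and the direction along which to smooth.

```python
import numpy as np
from scipy import ndimage

def structure_tensor(img, sigma=2.0):
    """Return the smoothed structure tensor components (Jxx, Jxy, Jyy) of a grayscale image."""
    ix = ndimage.sobel(img.astype(np.float64), axis=1)   # derivative in x
    iy = ndimage.sobel(img.astype(np.float64), axis=0)   # derivative in y
    # Smooth the products of derivatives to get the local tensor components.
    jxx = ndimage.gaussian_filter(ix * ix, sigma)
    jxy = ndimage.gaussian_filter(ix * iy, sigma)
    jyy = ndimage.gaussian_filter(iy * iy, sigma)
    return jxx, jxy, jyy

# Per pixel, the tensor's eigenvalues indicate flat regions (both small), edges
# (one large) or corners (both large); smoothing can then be applied along the
# eigenvector with the smaller eigenvalue (along the edge) instead of across it.
```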
