How do we detect the smooth edges that anti-aliasing produces in an image?
I have a number of PNG images with smooth (anti-aliased) edges, and I want to remove those edge pixels and replace them with the background.
I am trying to do this with a script.
This is a non-trivial problem. It is closely related to matting.
Basically, for each anti-aliased boundary pixel you need to solve the matting equation:
I = aF + (1-a)B, where the observed pixel value I is a convex combination (with weights a and 1-a) of the foreground color F and the background color B.
From the image you only know I, and you need to estimate a.
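When the background B is uniform and known and you have an estimate of the foreground color F, the per-pixel solution is direct. A minimal pure-Python sketch (the function name, and the assumption that F and B are known, are mine — full matting algorithms have to estimate F and B too, which is the hard part):

```python
def alpha_from_matting(I, F, B):
    """Solve I = a*F + (1-a)*B for a, assuming F and B are known.
    I, F, B are RGB tuples; returns a clamped to [0, 1]."""
    alphas = []
    for i, f, b in zip(I, F, B):
        if f != b:                     # this channel carries information about a
            alphas.append((i - b) / (f - b))
    if not alphas:
        return 1.0                     # F == B: any a reproduces I
    a = sum(alphas) / len(alphas)      # average the per-channel estimates
    return min(1.0, max(0.0, a))

# A boundary pixel halfway between a red foreground and a white background:
print(alpha_from_matting(I=(255, 128, 128), F=(255, 0, 0), B=(255, 255, 255)))  # ~0.498
```

Once a is known, de-fringing a pixel is a matter of snapping it to F or B depending on a threshold on a.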
Look into matting algorithms. There are hundreds of papers on the subject; here is a small sample from the first search results:
A Global Sampling Method for Alpha Matting
A Closed Form Solution to Natural Image Matting
Poisson Matting
Recently I have been trying to find ways to detect lines in CT scans. I found that the whole Hough transform family, and some other algorithms, need to work on contours produced by an edge detector. But the contours are not what I want, and these two steps create a lot of short line fragments, which has me stuck. Can anyone tell me what to do about this? Are there methods or algorithms that work directly on the grayscale image rather than on a binary image? Using OpenCV or numpy would be perfect! Many thanks!
Below is the test picture. I am trying to detect the straight lines at the top left and filter out the others.
You have a pretty consistent background, so I would:

1. Detect contours: any pixel that does not have the background color but neighbors a pixel that does.
2. Segment/label the contour points to form ordered "polylines":
   a. Create an ID buffer and set ID = 0 (background or object pixels).
   b. Find any not-yet-processed contour pixel; if none is found, stop.
   c. Flood-fill that contour in the ID buffer with the current ID.
   d. Increment ID and go to b.
   Now the ID buffer contains your labeled contours. For each contour, create an ordered list of the pixels forming the contour "polyline". To speed this up you can remember each contour's start point from step b, or even build the list directly during the flood fill in step c.
3. Detect straight lines in the contour "polylines": simple straight lines have a similar slope angle between neighboring points. You can also apply regression or whatever... but the slopes or unit direction vectors must be computed from pixels that are at least 5 pixels apart from each other, otherwise rasterization/pixelation will corrupt the results.
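The labeling steps and the spaced slope computation can be sketched in pure Python on a binary grid (0 = background; the function names and the 8-connectivity choice for the flood fill are my assumptions):

```python
from collections import deque
from math import atan2, degrees

def label_contours(img):
    """Mark contour pixels (non-background pixels touching the background),
    then flood-fill each connected contour into an ID buffer."""
    h, w = len(img), len(img[0])
    def is_contour(y, x):
        if img[y][x] == 0:
            return False
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if not (0 <= ny < h and 0 <= nx < w) or img[ny][nx] == 0:
                return True
        return False
    ids = [[0] * w for _ in range(h)]          # ID buffer, 0 = not a contour
    next_id = 1
    for y in range(h):
        for x in range(w):
            if is_contour(y, x) and ids[y][x] == 0:
                q = deque([(y, x)])            # flood-fill this contour with next_id
                ids[y][x] = next_id
                while q:
                    cy, cx = q.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and ids[ny][nx] == 0 and is_contour(ny, nx)):
                                ids[ny][nx] = next_id
                                q.append((ny, nx))
                next_id += 1
    return ids, next_id - 1

def slopes(polyline, step=5):
    """Slope angles (degrees) between (y, x) points `step` pixels apart;
    closer samples would be corrupted by rasterization."""
    return [degrees(atan2(polyline[i + step][0] - polyline[i][0],
                          polyline[i + step][1] - polyline[i][1]))
            for i in range(len(polyline) - step)]
```

Runs of nearly constant slope in the output of `slopes` are your straight-line candidates.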
see some related stuff:
Efficiently calculating a segmented regression on a large dataset
Given n points on a 2D plane, find the maximum number of points that lie on the same straight line
I'm working on a simple mapping application for fun, and one of the things I need to do is to find (and color) all of the points that are visible from the current location. In this case, points are pixels. My map is a raster image where transparent pixels are open space, and any other pixels are opaque. (There are no semi-transparent pixels; alpha is either 0 or 100%.) In this sense, it's sort of like a regular flood fill, with the constraint that each filled pixel has to have a clear line-of-sight to the origin point. The following image shows a couple of such areas colored in (the tiny crosshairs are the origin points, and white = transparent):
(http://tinyurl.com/nf3nqa4)
In addition, what I am ultimately interested in are the points that "border" other colors, i.e., I want the list of points that make up the edge of the visible region.
My current and very inefficient solution is the modified flood-fill I described above. This approach returns correct results, but due to the need to iterate every pixel on a line to the origin for every pixel in the flood fill, it's very slow. My images are downsized and quantized, but I still need about 1MP for acceptable accuracy, and typical LoS areas are at least 100,000 pixels each.
I may well be using the wrong search terms, but I haven't been able to find any discussion of algorithms that would solve this (rasterized) LoS case.
I suspect that this could be done more efficiently if your "walls" were represented as equations rather than simply pixels in a raster image. For example, polygons/triangles, circles, ellipses.
It would then be like raytracing (search for this term) in 2D. In other words, you could consider the ray/line from each pixel in the image to the point of interest and color the pixel only if it does not intersect with any object.
This method does require you to test the intersection for each pixel in the image with each object; however, if you look up raytracing you will find a number of efficient methods for testing these intersections. They will mostly be for the 3D case but it should be straightforward to convert them to 2D.
There are 3D raytracers that are very fast on MUCH larger images so this should be very doable.
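As a minimal illustration of the 2D case, assuming the walls are stored as line segments (all names here are mine), the visibility test reduces to a segment-segment intersection check:

```python
def segments_cross(p, q, a, b):
    """True if segment p-q properly intersects wall segment a-b
    (degenerate collinear overlaps are not handled in this sketch)."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2 = cross(p, q, a), cross(p, q, b)
    d3, d4 = cross(a, b, p), cross(a, b, q)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def visible(origin, pixel, walls):
    """A pixel is visible if the ray from the origin crosses no wall."""
    return all(not segments_cross(origin, pixel, a, b) for a, b in walls)

walls = [((5, 0), (5, 10))]                 # one vertical wall segment
print(visible((0, 5), (3, 5), walls))       # True: same side as the origin
print(visible((0, 5), (9, 5), walls))       # False: the wall blocks the ray
```

With many walls you would add a spatial index (a grid or BVH, exactly as 3D raytracers do) instead of testing every wall for every pixel.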
You can try a Delaunay triangulation on each color; that is, use the triangulation to recover the shape of each colored region.
Is it possible to get a rectangle distortion from few fixed points?
This example will explain better what I mean:
Suppose I've got this image with a rectangle and two points, and the two points are recognized in another image in which the rectangle is distorted.
How can I reproduce the distortion knowing the positions of the two (or maybe three) previous points?
My purpose is to get the border of the distorted rectangle. The real image is not as easy as the one in the example, so I can't just filter colors; I need another way to find the distorted border.
I believe what you're looking for can be described as an affine transform. If you want a general transform of a planar surface, you may want a perspective transform instead.
You can find the OpenCV implementation here. The relevant functions are cv::getAffineTransform which requires 3 pairs of points or cv::getPerspectiveTransform which requires 4 pairs of points.
Note: if you're using an automatic feature detector/matcher, it would be best to use far more point pairs than the minimum and use a robust outlier rejection algorithm like RANSAC.
A shift plus rotation needs 2 points.
An affine transform needs 3 points.
A perspective transform needs 4 points.
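For the 3-point affine case, solving for the transform is just two small linear systems, which is essentially what cv::getAffineTransform does. A pure-Python sketch using Cramer's rule (the function name is mine):

```python
def affine_from_3_points(src, dst):
    """Solve [x', y'] = [a b c; d e f] . [x, y, 1] from three point pairs.
    Returns the two rows (a, b, c) and (d, e, f) of the affine matrix."""
    (x1, y1), (x2, y2), (x3, y3) = src
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    if det == 0:
        raise ValueError("source points are collinear")
    def solve_row(v1, v2, v3):
        # Cramer's rule for a*x + b*y + c = v at the three source points
        a = (v1 * (y2 - y3) - y1 * (v2 - v3) + (v2 * y3 - v3 * y2)) / det
        b = (x1 * (v2 - v3) - v1 * (x2 - x3) + (x2 * v3 - x3 * v2)) / det
        c = (x1 * (y2 * v3 - y3 * v2) - y1 * (x2 * v3 - x3 * v2)
             + v1 * (x2 * y3 - x3 * y2)) / det
        return (a, b, c)
    return solve_row(*(p[0] for p in dst)), solve_row(*(p[1] for p in dst))

# A pure translation by (2, 3): expect rows (1, 0, 2) and (0, 1, 3)
row_x, row_y = affine_from_3_points([(0, 0), (1, 0), (0, 1)],
                                    [(2, 3), (3, 3), (2, 4)])
```

The perspective case is analogous but needs 4 pairs and an 8x8 system, which is why cv::getPerspectiveTransform exists as a separate function.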
Given a set of 2D points, I want to calculate a measure of how horizontally symmetrical and vertically symmetrical those points are.
Alternatively, for each set of points I will also have a rasterised image of the lines between those points, so is there any way to calculate a measure of symmetry for images?
BTW, this is for use in a feature vector that will be presented to a neural network.
Clarification
The image on the left is 'horizontally' symmetrical. If we imagine a vertical line running down the middle of it, the left and right parts are symmetrical. Likewise, the image on the right is 'vertically' symmetrical, if you imagine a horizontal line running across its center.
What I want is a measure of just how horizontally symmetrical they are, and another of just how vertically symmetrical they are.
This is just a guideline / idea, you'll need to work out the details:
To detect symmetry with respect to horizontal reflection:
reflect the image horizontally
pad the original (unreflected) image horizontally on both sides
compute the correlation of the padded and the reflected images
The position of the maximum in the result of the correlation will give you the location of the axis of symmetry. The value of the maximum will give you a measure of the symmetry, provided you do a suitable normalization first.
This will only work if your images are "symmetric enough", and it works for images only, not sets of points. But you can create an image from a set of points too.
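A pure-Python sketch of that idea for small grayscale grids (the function name is mine; sweeping all shifts plays the role of the padding step):

```python
def horizontal_symmetry(img):
    """Correlate the image with its left-right mirror over all horizontal
    shifts of the mirror; the best normalized score is the symmetry measure
    (1.0 = perfectly symmetric), and the best shift locates the axis."""
    h, w = len(img), len(img[0])
    mirror = [row[::-1] for row in img]
    energy = sum(v * v for row in img for v in row)   # normalization term
    if energy == 0:
        return 0.0
    best = 0.0
    for shift in range(-w + 1, w):                    # slide the mirrored copy
        corr = sum(img[y][x] * mirror[y][x - shift]
                   for y in range(h)
                   for x in range(max(0, shift), min(w, w + shift)))
        best = max(best, corr / energy)
    return best
```

For real images you would replace the O(w) shift loop with an FFT-based correlation; vertical symmetry is the same computation on the transposed image.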
Leonidas J. Guibas from Stanford University talked about it at ETVC'08: Detection of Symmetries and Repeated Patterns in 3D Point Cloud Data.
Photoshop has a lot of cool artistic filters, and I'd love to understand the underlying algorithms.
One algorithm that's particularly interesting is the Cutout filter (number 2 at the link above).
It has three tunable parameters, Number of Levels, Edge Simplicity, and Edge Fidelity. Number of levels appears to drive a straightforward posterization algorithm, but what the other sliders do technically eludes me.
I would think that they're doing something related to Voronoi diagrams or k-means partitioning, but poking around on Wikipedia hasn't turned up anything that maps obviously to what Photoshop is doing, especially considering how fast the filter renders.
Is there any source for technical descriptions of the Photoshop filters? Alternatively, do you have any thoughts about how this particular filter might be implemented?
Edge detection is usually done with a Sobel or Canny filter, and then the edges are joined together with a chain code.
Look at something like the OpenCV library for details.
Did you see this post? It explains how to get the same result using ImageMagick, and IM is open source.
Very old question, but maybe someone searching for an answer will find this helpful.
OpenCV's findContours and approxPolyDP functions can do this, but we need to prepare the image before the main processing.
First, find the N most-used colors with k-means; for example, find 8 colors. Find the contours for each color and calculate contourArea for all of them, one color at a time (we will have N = 8 layers). After that, draw the filled contours (after approxPolyDP) for each color, from the biggest contourArea to the smallest, using its pre-calculated color.
Another suggestion is to eliminate very small contours while calculating contourArea.
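The color-reduction step could be sketched like this: a deliberately tiny pure-Python k-means over RGB tuples (real code would use cv2.kmeans; the deterministic initialization and the names are my choices):

```python
def kmeans_colors(pixels, k, iters=10):
    """Find k dominant colors among RGB tuples (the 'most used N colors' step)."""
    # deterministic init: the first k distinct colors seen in the image
    centers = []
    for p in pixels:
        if p not in centers:
            centers.append(p)
            if len(centers) == k:
                break
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in pixels:                   # assign each pixel to its nearest center
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)
        centers = [tuple(sum(c[j] for c in cl) / len(cl) for j in range(3))
                   if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers
```

Each pixel is then mapped to its nearest center before the per-color contour pass.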
Mapping this to the Photoshop Cutout effect's parameters:
Number of Levels = k-means: find the N most-used colors.
Edge Simplicity = I guess a Gaussian blur or another noise-removal filter (an edge-preserving bilateral or mean-shift filter) would be useful for this step. It can be executed after k-means and before finding the contours.
Edge Fidelity = OpenCV's approxPolyDP epsilon parameter.
I'm not sure; it could be some kind of cel shading, but it also looks like a median filter with a very big kernel size, or one applied several times.
Edge Simplicity/Fidelity might be options that help decide whether or not to take an adjacent pixel (or one that falls inside the kernel) into account, based on its color difference from the current pixel.
Maybe not exactly what you are looking for, but if you like knowing how filters work, you could check out the source code of GIMP. I can't say whether GIMP has an equivalent of the Cutout filter you mentioned, but it's worth a look if you are truly interested in this field.
The Number of Levels parameter seems to resemble how cel shading is done, and this is how I'd implement that part: take the histogram of the image and divide it into "Number of Levels" sections, then calculate an average for each section. Each color in the histogram then uses the average of its section instead of its original value.
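A minimal sketch of that histogram-sectioning idea on a grayscale grid (the function name is mine, and the uniform section boundaries are an assumption; Photoshop may place them adaptively):

```python
def posterize_levels(gray, levels):
    """Split the 0-255 range into `levels` sections and replace each pixel
    with the histogram-weighted average intensity of its section."""
    hist = [0] * 256
    for row in gray:
        for v in row:
            hist[v] += 1
    width = 256 / levels
    avg = {}
    for s in range(levels):
        lo, hi = int(s * width), int((s + 1) * width)
        count = sum(hist[lo:hi])
        total = sum(v * hist[v] for v in range(lo, hi))
        # fall back to the section midpoint if no pixel lands in the section
        avg[s] = round(total / count) if count else (lo + hi) // 2
    return [[avg[min(levels - 1, int(v / width))] for v in row] for row in gray]
```

For color images the same thing would be done per channel, or better, on a luminance/palette representation.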
The other two parameters require some more thinking, but 'Edge Simplicity' seems to denote the number of segments the shapes are built from, or rather the number of refinements applied by some crude image segmentation algorithm. The Fidelity slider seems to do something similar; it probably controls some kind of threshold for when those refinements should take place.
This might help
I've got a simple solution that should, in theory, produce something similar to that filter.
It is somewhat similar to what Ismael C suggested.
Edge Simplicity controls the window size (maybe the window should be weighted).
But unlike regular windowed filters, this one would take only a fixed-size portion of random pixels from the window; the size of that portion is controlled by the Fidelity parameter.
Set the pixel color to the median of the sample.
Given some posterization algorithm, it is applied afterwards.
Here we go!
Please report your results if you implement it.
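For what it's worth, here is a pure-Python sketch of that recipe (grayscale only; all names and default values are mine, and seeding the RNG is my way of keeping the filter deterministic):

```python
import random

def cutout_like(gray, window=2, sample=5, seed=42):
    """For each pixel, take `sample` random pixels from the (2*window+1)^2
    neighborhood and use their median. `window` plays the role of Edge
    Simplicity and `sample` of Edge Fidelity."""
    rng = random.Random(seed)
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # clamp coordinates at the borders instead of shrinking the window
            neigh = [gray[min(h - 1, max(0, y + dy))][min(w - 1, max(0, x + dx))]
                     for dy in range(-window, window + 1)
                     for dx in range(-window, window + 1)]
            picks = sorted(rng.sample(neigh, min(sample, len(neigh))))
            out[y][x] = picks[len(picks) // 2]
    return out
```

The posterization pass would then map the result onto the reduced palette.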
PS. I really doubt that segmentation is used at all.
I imagine it's probably some thresholding, edge-detection (Sobel/Canny/Roberts/whatever) and posterisation.
From tinkering with it I've found that:
it's deterministic;
it doesn't do any kind of pixel-based posterization to achieve the final effect;
it probably doesn't use any kind of pixel-based edge detection; it seems to work with areas rather than edges;
it computes the shapes to draw as closed polygons (some of the polygon edges may coincide with the image edges);
once the polygon edges are known, each area enclosed by edges (not necessarily belonging to a single polygon) is colored with the average color of the original image's pixels that the area covers;
a polygon's edge can intersect itself, which is especially visible at high Edge Simplicity;
as Edge Simplicity drops, the number of polygon edges increases, but the number of polygons increases too;
Edge Fidelity influences the polygon edge count but does not influence the polygon count;
high Edge Fidelity (= 3) causes a single polygon to have very long and very short edges at the same time; low fidelity (= 1) causes a single polygon's edges to be of roughly similar length;
high Edge Simplicity and low Edge Fidelity seem to prefer polygons anchored at the edges of the image, even at the cost of sanity.
Altogether it looks like a simplified version of the Live Trace algorithm from Adobe Illustrator, using polygons instead of curves.
... or maybe not.