I'm doing forward image warping with control points, but as expected with any forward warping, the target coordinates are non-integer, so it creates holes in the target image. Since the transform is different for every pixel in the source image, I couldn't figure out how to calculate the inverse transform for every pixel in the destination image, which would let me do this as a backward warping with bilinear interpolation.
So in short my questions are:
1 - Is it possible to calculate an inverse transform for each destination pixel so that this can be done as a backward warping?
2 - If I am forced to do it as a forward warping, how can I take care of the holes in the target image? Simply rounding to the nearest integer coordinates creates holes; distributing the same color to neighboring pixels takes care of the problem, but with heavy aliasing, so I believe there is a better way to do this.
Any help is appreciated. Thanks.
You need to do bilinear filtering twice. The first pass distributes the transform to the pixels around the target location whenever the target coordinate during forward warping is non-integer. Then, using these distributed transforms as the inverse mapping, you do a backward warping and apply bilinear interpolation again when sampling the pixels from the source.
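To make this concrete, here is a minimal NumPy sketch of the two-pass idea (the function names and the single-channel/grayscale assumption are mine): the first pass splats each source pixel's coordinates onto the four destination pixels around its forward-warped position using bilinear weights, and the second pass uses the accumulated map to sample the source with bilinear interpolation.

    import numpy as np

    def splat_inverse_map(src_xy, dst_xy, h, w):
        # Pass 1: for each source pixel (sx, sy) and its forward-warped,
        # non-integer destination (dx, dy), distribute the *source*
        # coordinate over the 4 surrounding destination pixels with
        # bilinear weights. Normalizing by the accumulated weight yields
        # an approximate dst -> src map.
        acc = np.zeros((h, w, 2))
        wgt = np.zeros((h, w))
        for (sx, sy), (dx, dy) in zip(src_xy, dst_xy):
            x0, y0 = int(np.floor(dx)), int(np.floor(dy))
            fx, fy = dx - x0, dy - y0
            for xi, yi, wij in ((x0,     y0,     (1 - fx) * (1 - fy)),
                                (x0 + 1, y0,     fx * (1 - fy)),
                                (x0,     y0 + 1, (1 - fx) * fy),
                                (x0 + 1, y0 + 1, fx * fy)):
                if 0 <= xi < w and 0 <= yi < h:
                    acc[yi, xi] += wij * np.array([sx, sy])
                    wgt[yi, xi] += wij
        valid = wgt > 0
        acc[valid] /= wgt[valid][:, None]
        return acc, valid

    def backward_warp(src, inv_map, valid):
        # Pass 2: backward warping; bilinear interpolation when sampling
        # the source image at the splatted (non-integer) coordinates.
        h, w = inv_map.shape[:2]
        out = np.zeros((h, w))
        for y in range(h):
            for x in range(w):
                if not valid[y, x]:
                    continue  # no source pixel mapped near this target
                sx = np.clip(inv_map[y, x, 0], 0, src.shape[1] - 1)
                sy = np.clip(inv_map[y, x, 1], 0, src.shape[0] - 1)
                x0, y0 = int(sx), int(sy)
                x1 = min(x0 + 1, src.shape[1] - 1)
                y1 = min(y0 + 1, src.shape[0] - 1)
                fx, fy = sx - x0, sy - y0
                out[y, x] = ((1 - fx) * (1 - fy) * src[y0, x0] +
                             fx * (1 - fy) * src[y0, x1] +
                             (1 - fx) * fy * src[y1, x0] +
                             fx * fy * src[y1, x1])
        return out

Pixels that received no splat at all (valid == False) are the residual holes; in practice you would inpaint or dilate the map to fill them.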
I'm working on a simple mapping application for fun, and one of the things I need to do is to find (and color) all of the points that are visible from the current location. In this case, points are pixels. My map is a raster image where transparent pixels are open space, and any other pixels are opaque. (There are no semi-transparent pixels; alpha is either 0 or 100%.) In this sense, it's sort of like a regular flood fill, with the constraint that each filled pixel has to have a clear line-of-sight to the origin point. The following image shows a couple of such areas colored in (the tiny crosshairs are the origin points, and white = transparent):
(Image: http://tinyurl.com/nf3nqa4)
In addition, what I am ultimately interested in are the points that "border" other colors, i.e., I want the list of points that make up the edge of the visible region.
My current and very inefficient solution is the modified flood-fill I described above. This approach returns correct results, but due to the need to iterate every pixel on a line to the origin for every pixel in the flood fill, it's very slow. My images are downsized and quantized, but I still need about 1MP for acceptable accuracy, and typical LoS areas are at least 100,000 pixels each.
I may well be using the wrong search terms, but I haven't been able to find any discussion of algorithms that would solve this (rasterized) LoS case.
I suspect that this could be done more efficiently if your "walls" were represented as equations rather than simply pixels in a raster image. For example, polygons/triangles, circles, ellipses.
It would then be like raytracing (search for this term) in 2D. In other words, you could consider the ray/line from each pixel in the image to the point of interest and color the pixel only if it does not intersect with any object.
This method does require you to test the intersection for each pixel in the image with each object; however, if you look up raytracing you will find a number of efficient methods for testing these intersections. They will mostly be for the 3D case but it should be straightforward to convert them to 2D.
There are 3D raytracers that are very fast on MUCH larger images so this should be very doable.
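As a rough illustration of the core test (the segment representation of walls is my assumption, not something from the question), 2D visibility reduces to segment-segment intersection:

    def cross(o, u, v):
        # 2D cross product of (u - o) and (v - o); its sign gives the
        # orientation of v relative to the ray o -> u.
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])

    def segments_cross(p, q, a, b):
        # True if segment p-q strictly crosses segment a-b. Collinear
        # touching is ignored, which is usually acceptable for a sketch.
        return (cross(a, b, p) * cross(a, b, q) < 0 and
                cross(p, q, a) * cross(p, q, b) < 0)

    def visible(origin, point, walls):
        # walls: iterable of ((x1, y1), (x2, y2)) segments
        return not any(segments_cross(origin, point, a, b) for a, b in walls)

What makes this fast at the 1 MP scale is a spatial index over the walls (uniform grids, BVHs, and the other acceleration structures the raytracing literature covers), so each ray only tests a handful of segments.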
You can try a Delaunay triangulation on each color. I mean, you can try to recover the shape of each colored region with DT.
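If you go that route, a SciPy sketch could look like the following (the max_edge threshold is my addition, in the spirit of an alpha shape):

    import numpy as np
    from scipy.spatial import Delaunay

    def shape_boundary(points, max_edge):
        # Triangulate, drop triangles with any edge longer than max_edge,
        # then keep the edges used by exactly one surviving triangle:
        # those edges trace the outline of the (possibly concave) shape.
        points = np.asarray(points, dtype=float)
        tri = Delaunay(points)
        edge_count = {}
        for s in tri.simplices:
            edges = [tuple(sorted((s[i], s[(i + 1) % 3]))) for i in range(3)]
            if any(np.linalg.norm(points[a] - points[b]) > max_edge
                   for a, b in edges):
                continue
            for e in edges:
                edge_count[e] = edge_count.get(e, 0) + 1
        return [e for e, n in edge_count.items() if n == 1]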
I have a large matrix (image) and a small template. I would like to convolve the small matrix with the larger matrix. For example, the blue region is the section that I want to be used for the convolution; in other words, I could run the convolution over the whole image, but since that increases CPU time, I would like to focus only on the desired blue part.
Is there any command in MATLAB that can be used for this convolution? Or how can I force the convolution function to use just that specific irregular section?
I doubt you can do an irregular shape (fast convolution is done with a 2D FFT, which requires a rectangular region). You could optimize it by finding the shape's bounding box and thus discarding the empty border.
@Nicole I would go for fft2(im) .* fft2(smallIm) (with smallIm zero-padded to the size of im), which is the equivalent of conv2(im, smallIm) up to circular boundary effects.
As far as recognizing the irregular shape goes, you can use edge detection like Canny and find the extreme (left, right, top, bottom) points; since Canny returns a binary (1/0) image, you can build a bounding box from those values. However, this will take some time to compute, and I'm not sure how much faster the whole thing will be.
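For what it's worth, here is the bounding-box idea as a sketch in Python/SciPy terms (scipy.signal.fftconvolve plays the role of the FFT-based conv2; the boolean mask input is my assumption for how the irregular region is specified):

    import numpy as np
    from scipy.signal import fftconvolve

    def conv_in_region(image, template, mask):
        # Find the bounding box of the irregular region and run the
        # FFT-based convolution only on that crop, padded by the template
        # half-size so results near the region border stay correct.
        ys, xs = np.nonzero(mask)
        py, px = template.shape[0] // 2, template.shape[1] // 2
        y0, y1 = max(ys.min() - py, 0), min(ys.max() + py + 1, image.shape[0])
        x0, x1 = max(xs.min() - px, 0), min(xs.max() + px + 1, image.shape[1])
        out = np.zeros(image.shape, dtype=float)
        out[y0:y1, x0:x1] = fftconvolve(image[y0:y1, x0:x1], template,
                                        mode='same')
        out[~mask.astype(bool)] = 0  # keep results only inside the region
        return out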
Is it possible to get a rectangle's distortion from a few fixed points?
This example will explain better what I mean:
Suppose I've got an image with a rectangle and two points, and the two points are recognized in another image where the rectangle is distorted.
How can I reproduce the distortion, knowing the positions of the two (or maybe three) previous points?
My purpose is to get the distorted rectangle's border. It's not as clean an image as the one in the example, so I can't just filter colors; I need to find a way to get the distorted image's border.
I believe what you're looking for can be described as an affine transform. If you want general transform of a planar surface, you may want perspective transform instead.
You can find the OpenCV implementation here. The relevant functions are cv::getAffineTransform which requires 3 pairs of points or cv::getPerspectiveTransform which requires 4 pairs of points.
Note: if you're using an automatic feature detector/matcher, it would be best to use far more point pairs than the minimum and use a robust outlier rejection algorithm like RANSAC.
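For instance, with the Python bindings a minimal sketch looks like this (the point coordinates are made up for illustration):

    import numpy as np
    import cv2

    img = np.zeros((120, 220, 3), np.uint8)  # stand-in for your image
    h, w = img.shape[:2]

    # 3 point pairs -> affine (rotation, scale, shear, translation)
    src3 = np.float32([[0, 0], [200, 0], [0, 100]])
    dst3 = np.float32([[10, 20], [215, 5], [4, 120]])
    A = cv2.getAffineTransform(src3, dst3)       # 2x3 matrix
    out_a = cv2.warpAffine(img, A, (w, h))

    # 4 point pairs -> full planar (perspective) transform
    src4 = np.float32([[0, 0], [200, 0], [200, 100], [0, 100]])
    dst4 = np.float32([[10, 20], [215, 5], [210, 110], [4, 120]])
    H = cv2.getPerspectiveTransform(src4, dst4)  # 3x3 homography
    out_p = cv2.warpPerspective(img, H, (w, h))

With many (noisy) automatic matches, cv2.estimateAffine2D or cv2.findHomography with the cv2.RANSAC flag do the robust fitting mentioned above.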
Shift and rotation need 2 points.
Affine transform needs 3 points.
Perspective transform needs 4 points.
Given a set of 2D points, I want to calculate a measure of how horizontally symmetrical and vertically symmetrical those points are.
Alternatively, for each set of points I will also have a rasterised image of the lines between those points, so is there any way to calculate a measure of symmetry for images?
BTW, this is for use in a feature vector that will be presented to a neural network.
Clarification
The image on the left is 'horizontally' symmetrical. If we imagine a vertical line running down the middle of it, the left and right parts are symmetrical. Likewise, the image on the right is 'vertically' symmetrical, if you imagine a horizontal line running across its center.
What I want is a measure of just how horizontally symmetrical they are, and another of just how vertically symmetrical they are.
This is just a guideline / idea, you'll need to work out the details:
To detect symmetry with respect to horizontal reflection:
reflect the image horizontally
pad the original (unreflected) image horizontally on both sides
compute the correlation of the padded and the reflected images
The position of the maximum in the result of the correlation will give you the location of the axis of symmetry. The value of the maximum will give you a measure of the symmetry, provided you do a suitable normalization first.
This will only work if your images are "symmetric enough", and it works for images only, not sets of points. But you can create an image from a set of points too.
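A sketch of the idea in Python (grayscale array assumed; the explicit shift loop stands in for the pad-and-correlate step, and the mean subtraction plus division is the normalization):

    import numpy as np

    def horizontal_symmetry(img):
        # Reflect the image left-right, slide the reflection across the
        # original, and record the normalized correlation at each shift.
        # The best score measures the symmetry; the best shift puts the
        # mirror axis at (w - 1 + shift) / 2.
        img = np.asarray(img, dtype=float)
        ref = img[:, ::-1]
        h, w = img.shape
        best_score, best_shift = -1.0, 0
        # limiting the shift range keeps the overlap reasonably large
        for shift in range(-(w // 2), w // 2 + 1):
            if shift >= 0:
                a, b = img[:, shift:], ref[:, :w - shift]
            else:
                a, b = img[:, :w + shift], ref[:, -shift:]
            na, nb = a - a.mean(), b - b.mean()
            denom = np.sqrt((na ** 2).sum() * (nb ** 2).sum())
            if denom > 0:
                score = (na * nb).sum() / denom
                if score > best_score:
                    best_score, best_shift = score, shift
        return best_score, (w - 1 + best_shift) / 2.0

The vertical measure is the same computation on img[::-1, :] with row shifts instead of column shifts.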
Leonidas J. Guibas from Stanford University talked about this at ETVC'08:
Detection of Symmetries and Repeated Patterns in 3D Point Cloud Data.
I am looking for an algorithm that takes vector image data (e.g. sets of edges) and interpolates another set of edges that is the "average" of the two (or more) input sets.
To put it in another way, it is just like Adobe Flash where you "tween" two vector images and the software automatically computes the in-between images. Therefore you only specify the starting image and end image, then Flash takes care of all the in-between images.
Is there any established algorithm to do this, especially for cases where the sets have different numbers of edges?
What exactly do you mean by edges? Are we talking about smooth vector graphics that use curves?
Well a basic strategy would be to simply do a linear interpolation on the points and directions of your control polygon.
Basically you could simply take two corresponding points (one from each curve/vector shape) and interpolate them with:
x(t) = (1-t)*p1 + t*p2 with t in [0,1]
(t=0.5 would then of course give you the average between the two)
Since vector graphics usually use curves you'd need to do the same with the direction vector of each control point to get the direction vector of the averaged curve.
One big problem, though, is matching the right points of each control polygon, especially if the two curves have different degrees. You could try doing a degree elevation on one to match the degree of the other, then assign the points to each other one by one and interpolate.
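As a sketch (NumPy; the uniform arc-length resampling is a crude stand-in for the degree-elevation/matching step described above):

    import numpy as np

    def resample(points, n):
        # Resample a polyline to n points, uniformly spaced by arc length,
        # so two shapes with different point counts can be put into
        # one-to-one correspondence.
        pts = np.asarray(points, dtype=float)
        seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
        s = np.concatenate([[0.0], np.cumsum(seg)])
        t = np.linspace(0.0, s[-1], n)
        return np.stack([np.interp(t, s, pts[:, 0]),
                         np.interp(t, s, pts[:, 1])], axis=1)

    def tween(a, b, t):
        # x(t) = (1-t)*p1 + t*p2 applied to every corresponding point pair;
        # t = 0.5 gives the average shape.
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        return (1.0 - t) * a + t * b

    # usage: mid = tween(resample(shape_a, 64), resample(shape_b, 64), 0.5)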
Maybe that helps...