Inpainting in multiple channels - scikit-image

I have a 3D matrix A of size m×n×l, which I want to inpaint using a mask of size m×n.
Each slice along l is a 2D image (values in the 0-255 range). I care about continuity within each slice as well as along the third dimension.
I use the inpainting in the two following forms:
im1 = inpaint.inpaint_biharmonic(np.uint8(A), np.uint8(mask), multichannel=True)

im2 = np.zeros_like(A)
for i in range(l):
    im2[:, :, i] = inpaint.inpaint_biharmonic(np.uint8(A[:, :, i]), np.uint8(mask), multichannel=False)
How is the 3rd dimension handled in the algorithm? Will they produce the same results?

You can look at the source code of the function here:
https://github.com/scikit-image/scikit-image/blob/c221d982e493a1e39881feb5510bd26659a89a3f/skimage/restoration/inpaint.py#L76
As you can see from the for-loop in that function, it does the same per-channel work as your explicit loop, so the two forms should produce identical results.
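A quick way to convince yourself, using a toy stand-in for the inpainter (a mean-fill rather than the real biharmonic solver, so skimage isn't needed here): looping over the last axis yourself and wrapping the loop in a "multichannel" helper give bit-identical results.

```python
import numpy as np

def fill_masked_with_mean(img, mask):
    """Stand-in for a single-channel inpainter: replace masked
    pixels with the mean of the unmasked pixels."""
    out = img.astype(float).copy()
    out[mask] = img[~mask].mean()
    return out

def inpaint_multichannel(A, mask):
    """Mimic the multichannel=True path: loop over the last axis
    and inpaint each channel independently with the same mask."""
    out = np.empty(A.shape, dtype=float)
    for i in range(A.shape[-1]):
        out[..., i] = fill_masked_with_mean(A[..., i], mask)
    return out

rng = np.random.default_rng(0)
A = rng.integers(0, 256, size=(4, 5, 3))
mask = np.zeros((4, 5), dtype=bool)
mask[1, 2] = True

# The wrapper and the explicit per-slice loop agree exactly.
im1 = inpaint_multichannel(A, mask)
im2 = np.stack([fill_masked_with_mean(A[:, :, i], mask) for i in range(3)], axis=-1)
assert np.array_equal(im1, im2)
```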


Image Recognition - Classifying an image in an image (i.e. classifying an object based on surrounding objects)?

I'm kind of new to this image classification stuff, so this is a somewhat high-level question. I was wondering whether it's possible to train an image classifier (e.g. using TF/Keras or one of the many image recognition libraries and APIs) to identify whether an object is inside another object. For example:
Output: A square
Output: A circle
Output: A circle in a square
Output: A square in a circle in a square
Output: A square in a circle and a square in a square
...and so on
If it's possible, what's the best way to go about it? Do I have to train the model to recognize all the variations example by example (which is unfavorable as there are far too many potential examples), or is there some better way? Thanks :)
You can do this with simpler computer vision techniques instead of machine learning.
For example, OpenCV has a built-in function called findContours, which returns a hierarchy.
Example:
The matrix on top shows how each shape is related to the others, in the format -
[Next, Previous, First_Child, Parent]
For instance, contours 2 and 4 (the circle and the rectangle) are at the same level; hence in the matrix, the Next entry of row 2 is 4. You can construct a tree like this to get the output you desire. You just need to make sure that the inner and outer contours of a single shape are not counted as two separate ones, which I didn't do here, so it shows 5, 7 in the output.
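As a sketch of that tree-building step, here is plain Python walking a hypothetical hierarchy array in the [Next, Previous, First_Child, Parent] format (the contour indices and shape names are made up for illustration, not taken from a real findContours call):

```python
# Hypothetical hierarchy in OpenCV's [Next, Previous, First_Child, Parent]
# format: contour 0 is an outer square, contour 1 a circle inside it,
# contour 2 a square nested inside the circle.
hierarchy = [
    [-1, -1,  1, -1],  # 0: outer square, first child is 1
    [-1, -1,  2,  0],  # 1: circle inside 0, first child is 2
    [-1, -1, -1,  1],  # 2: square inside 1, no children
]

def children(hierarchy, parent):
    """Collect the direct children of `parent` by following the
    First_Child link, then the Next links between siblings."""
    out = []
    child = hierarchy[parent][2]
    while child != -1:
        out.append(child)
        child = hierarchy[child][0]
    return out

def describe(hierarchy, idx, names):
    """Turn the nesting into a phrase like 'square in circle in square'."""
    kids = children(hierarchy, idx)
    if not kids:
        return names[idx]
    inner = " and ".join(describe(hierarchy, k, names) for k in kids)
    return f"{inner} in {names[idx]}"

names = {0: "square", 1: "circle", 2: "square"}
print(describe(hierarchy, 0, names))  # square in circle in square
```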

MATLAB: Set points within plot polygon equal to zero

I am currently doing some seismic modelling and processing in MATLAB, and would like to come up with an easy way of muting parts of various datasets. If I plot the frequency-wavenumber spectrum of some of my data, for instance, I obtain the following result:
Now, say that I want to mute some of the data present here. I could of course attempt to run through the entire matrix represented here and specify a threshold value where everything above said value is set equal to zero, but this will be very difficult and time-consuming when I later work with more complicated fk-spectra. I recently learned that MATLAB has an inbuilt function called impoly which allows me to interactively draw a polygon in plots. So say I, for instance, draw the following polygon in my plot with the impoly function:
Is there anything I can do now to set all points within this polygon equal to zero? After defining the polygon as illustrated above I haven't found out how to proceed in order to mute the information contained in the polygon, so if anybody can give me some help here, I would greatly appreciate it!
Yes, you can use the createMask function that's part of the impoly interface once you delineate the polygon in your figure. Once you create this mask, you can use it to index into your data and set the relevant regions to zero.
Here's a quick example using the pout.tif image in MATLAB:
im = imread('pout.tif');
figure; imshow(im);
h = impoly;
I get this figure and I draw a polygon inside this image:
Now, use the createMask function with the handle to the impoly call to create a binary mask that encapsulates this polygon:
mask = createMask(h);
I get this mask:
imshow(mask);
You can then use this mask to index into your data and set the right regions to 0. First make a copy of the original data then set the data accordingly.
im_zero = im;
im_zero(mask) = 0;
I now get this:
imshow(im_zero);
Note that this only applies to single channel (2D) data. If you want to apply this to multi-channel (3D) data, then perhaps a multiplication channel-wise with the opposite of the mask may be prudent.
Something like this:
im_zero = bsxfun(@times, im, cast(~mask, class(im)));
The above code takes the complement of the polygon mask, converts it into the same class as the original input im, then performs an element-wise multiplication of this mask with each channel of the input separately. The result zeroes every spatial location defined in the mask over all channels.
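The same two indexing steps can be sketched in NumPy, with random stand-ins for the image and the polygon mask (boolean indexing plays the role of im_zero(mask) = 0, and broadcasting plays the role of bsxfun):

```python
import numpy as np

rng = np.random.default_rng(1)
im = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)

# A binary mask standing in for the polygon returned by createMask.
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

# Single-channel case: boolean indexing zeroes the masked pixels.
im_zero = im.copy()
im_zero[mask] = 0
assert (im_zero[mask] == 0).all()

# Multi-channel case: multiply each channel by the inverted mask;
# broadcasting expands the 2D mask across the channel axis.
im3 = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)
im3_zero = im3 * (~mask[..., None]).astype(im3.dtype)
assert (im3_zero[mask] == 0).all()
```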

Resizing in MATLAB w/ different filters

I have an image. I want to resize it to double the original size, filling in the new pixels by interpolation. I need to specify which type of interpolation I want to use.
I see the imresize function, which has an option for 'method'. The problem is, there are only 3 options: nearest, bilinear, bicubic. Bilinear and bicubic are averaging/mean methods, but is there any way to set the neighborhood size / weighting?
The main problem is, I need to do it with a 'median' interpolation method, instead of mean. How can I tell it to use this method?
The way that IMRESIZE implements interpolation is by calculating, for each pixel in the output image (inverse mapping), the indices of the pixels in the input image that will be involved in the interpolation, along with the contributing weights.
The neighborhood and the weights are determined by the type of interpolation kernel used, which, as @Albert points out, can be passed to the IMRESIZE function (the 'Method' property can accept {f,w}, a cell array with the kernel function and the kernel width).
These two components are used to compute a linear combination of the involved input pixels to fill each output pixel value. This process is performed along each dimension separately, one at a time (vertically, then horizontally).
Now the problem for you is that you can never obtain the median value using a linear combination, because the median is a non-linear order-statistic filter. So your only option is to write your own implementation...
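A toy sketch of what such an implementation could look like: a 2x upscale where each output pixel takes the median of a small window around its inverse-mapped input position (the window size and the nearest-neighbor inverse mapping are arbitrary choices here, not what imresize does):

```python
import numpy as np

def median_resize2x(img, radius=1):
    """Toy 2x upscaling: each output pixel is the median of the
    input pixels in a small window around its inverse-mapped position."""
    h, w = img.shape
    out = np.empty((2 * h, 2 * w))
    for y in range(2 * h):
        for x in range(2 * w):
            sy, sx = y // 2, x // 2  # inverse mapping to the input grid
            y0, y1 = max(sy - radius, 0), min(sy + radius + 1, h)
            x0, x1 = max(sx - radius, 0), min(sx + radius + 1, w)
            out[y, x] = np.median(img[y0:y1, x0:x1])
    return out

img = np.arange(9, dtype=float).reshape(3, 3)
big = median_resize2x(img)
assert big.shape == (6, 6)
```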
Amro is right that the median filter cannot be computed as a weighted response. But MATLAB has a specific function for the median filter: medfilt2.
imresize has a third way of passing the interpolation method: a "Two-element Cell Array Specifying Interpolation Kernel". You can read more about it in MATLAB's documentation.

Creating Neutral network for line detection - is it possible?

So I want to detect lines in grayscale images. I have a lot of data: 9x9 matrices of pixel intensities (1 to 256) and 1x4 matrices of endpoint coordinates (X1, Y1, X2, Y2). Each 9x9 image contains either one line or no line. What structure should my NN have?
Assuming that you're using the most common variety of neural network, the multilayer perceptron, you'll have exactly as many input nodes as there are features.
The inputs may include transformed variables, in addition to the raw variables. The number of hidden nodes is selected by you, but you should have enough to permit the neural network to adequately make the mapping.
The number of output nodes will be determined by the number of classes and the representation you choose. Assuming two classes ("line", "not line" seems likely), you may use 1 output node, which indicates the estimated probability of one class (the probability of the remaining class being 1 minus the probability of the first class).
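As a shape-level sketch of that architecture (81 inputs for a flattened 9x9 patch, an arbitrarily sized hidden layer, one sigmoid output interpreted as P("line"); the weights are random and untrained, this only illustrates the dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)

# 9x9 image flattened to 81 inputs; hidden size is a free choice;
# 1 sigmoid output = estimated probability of the "line" class.
n_in, n_hidden, n_out = 81, 20, 1
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))
b2 = np.zeros(n_out)

def forward(x):
    """One forward pass: tanh hidden layer, sigmoid output."""
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

x = rng.random((5, 81))        # a batch of five flattened 9x9 images
p = forward(x)                  # probabilities, one per image
assert p.shape == (5, 1)
```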
Detecting simple lines in a grayscale image is a well-known problem. A Hough transform would suffice for the job. See http://opencv.willowgarage.com/documentation/cpp/imgproc_feature_detection.html?highlight=hough%20line#cv-houghlines for a function that implements line finding using the Hough transform.
Can you try the above function and see if it works?
If it doesn't, please update your question with a sample image.
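To illustrate the voting idea behind the Hough transform (a toy NumPy accumulator, not OpenCV's implementation): points on the vertical line x = 2 all vote into the same (theta, rho) bin, and the accumulator peak recovers the line's parameters.

```python
import numpy as np

# Toy Hough transform: five points on the vertical line x = 2.
points = [(2, y) for y in range(5)]

thetas = np.deg2rad(np.arange(0, 180, 15))   # coarse angle bins
rho_max = 10
acc = np.zeros((len(thetas), 2 * rho_max + 1), dtype=int)

for x, y in points:
    for t_idx, theta in enumerate(thetas):
        # Normal form of a line: rho = x*cos(theta) + y*sin(theta)
        rho = int(round(x * np.cos(theta) + y * np.sin(theta)))
        acc[t_idx, rho + rho_max] += 1

t_idx, r_idx = np.unravel_index(acc.argmax(), acc.shape)
# All five points fall in one bin only for theta = 0, rho = 2,
# i.e. the vertical line x = 2.
assert thetas[t_idx] == 0.0 and r_idx - rho_max == 2
```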

How to extract a linear slice from an image in OpenCV / EMGU

I have an image and two points,
and I want to read the pixels between these two points,
and resample them into a small 1x40 array.
I'm using EMGU which is a C# wrapper for OpenCV.
thanks,
SW
What you are looking for is Bresenham's line algorithm. It will give you the points in the pixel array that best approximate a straight line. The Wikipedia link also contains pseudocode to get you started.
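A straightforward Python version of the algorithm (following the integer error-accumulation form given on that Wikipedia page, which handles all octants):

```python
def bresenham(x0, y0, x1, y1):
    """Integer Bresenham line: returns the grid points that best
    approximate the segment from (x0, y0) to (x1, y1)."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:             # step in x
            err += dy
            x0 += sx
        if e2 <= dx:             # step in y
            err += dx
            y0 += sy
    return points

pts = bresenham(0, 0, 4, 2)
assert pts[0] == (0, 0) and pts[-1] == (4, 2)
```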
Emgu CV includes a method in the Image class for sampling color along a line, called Sample.
Refer to the manual for the definition. Here's the link to Image.Sample in version 2.3.0.
You will still have to re-sample/interpolate the points in the array returned from Sample to end up with a 40-element array. Since there are a number of ways to re-sample, I'll suggest you look at other questions for that.
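One simple option is linear interpolation; for instance, in NumPy (np.interp standing in for whichever re-sampling scheme you prefer, and 57 being an arbitrary example length for the sampled line):

```python
import numpy as np

# Resample a sampled line of arbitrary length to exactly 40 values
# by linear interpolation over the sample index.
samples = np.linspace(0.0, 100.0, num=57)   # e.g. 57 pixels along the line
target = np.interp(np.linspace(0, len(samples) - 1, num=40),
                   np.arange(len(samples)), samples)
assert target.shape == (40,)
```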
Rotate and crop
I'd first try to do it like this:
calculate rotation matrix with (GetRotationMatrix2D)
warp the image so that this line is horizontal (WarpAffine)
calculate new positions of two of your points (you can use Transform)
get image rectangle of suitable width and 1 px high (GetRectSubPix)
Interpolation here and there may affect the results, but you have to interpolate anyway. You may consider cropping the image before rotation.
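The geometry behind steps 1-3 can be sketched with plain NumPy (the endpoints here are hypothetical; the real code would build the matrix with GetRotationMatrix2D and apply it with WarpAffine):

```python
import numpy as np

# Two hypothetical endpoints of the line in the image.
p0 = np.array([10.0, 20.0])
p1 = np.array([40.0, 50.0])

# Angle of the segment, and a rotation by -angle to make it horizontal.
angle = np.arctan2(p1[1] - p0[1], p1[0] - p0[0])
c, s = np.cos(-angle), np.sin(-angle)
R = np.array([[c, -s],
              [s,  c]])          # rotation about the origin

q0, q1 = R @ p0, R @ p1
# After rotation the endpoints share a y coordinate (the line is
# horizontal), so a 1-px-high rectangle at that y covers it ...
assert abs(q0[1] - q1[1]) < 1e-9
# ... and the rotation preserves the distance between the points.
assert np.isclose(np.linalg.norm(q1 - q0), np.linalg.norm(p1 - p0))
```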
Iterate over the 8-connected pixels of the line
Otherwise you may use the line iterator to iterate over the pixels between two points. See the documentation for InitLineIterator (sorry, the link is to the Python version of OpenCV; I've never heard of EMGU). I suppose that in this case you iterate over the pixels of a line that was not antialiased, but this should be much faster.
Interpolate manually
Finally, you may convert the image to an array, calculate which elements the line is passing through and subsample and interpolate manually.
